Department of Computer Science and Automation, Indian Institute of Science, Bangalore, India

UPCOMING SEMINARS

Series: Ph.D. Colloquium
Title: Falcon: A Graph Manipulation Language for Distributed Heterogeneous Systems

  • Speaker: Unnikrishnan C.
  • Faculty Advisor: Y.N. Srikant
  • Date and Time: Tuesday, July 25, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Graphs model relationships across real-world entities in web graphs, social network graphs, and road network graphs. Graph algorithms analyze and transform a graph to discover graph properties or to apply a computation. For instance, a PageRank algorithm computes a rank for each page in a web graph, a community detection algorithm discovers likely communities in a social network, and a shortest path algorithm computes the quickest route between two places in a road network. In domains such as social information systems, the number of edges can run into billions or trillions. Such large graphs are processed on distributed computer systems or clusters.
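
As a concrete illustration of the kind of computation these frameworks target, here is a plain power-iteration PageRank over an adjacency-list graph; this is only an illustrative sequential baseline in Python (the graph layout and damping factor are assumptions made here), not Falcon code.

```python
# Illustrative only: a sequential power-iteration PageRank.
# Falcon targets this class of algorithms, but on distributed,
# heterogeneous hardware (multi-core CPUs, GPUs, clusters).
def pagerank(graph, damping=0.85, iters=50):
    """graph: dict mapping node -> list of out-neighbours (assumed layout)."""
    n = len(graph)
    rank = {v: 1.0 / n for v in graph}
    for _ in range(iters):
        new_rank = {v: (1.0 - damping) / n for v in graph}
        for v, out in graph.items():
            if not out:                          # dangling node: spread rank evenly
                for u in graph:
                    new_rank[u] += damping * rank[v] / n
            else:
                share = damping * rank[v] / len(out)
                for u in out:                    # push this node's rank to successors
                    new_rank[u] += share
        rank = new_rank
    return rank

# Tiny hypothetical web graph: A -> B, A -> C, B -> C, C -> A
print(pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]}))
```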

Graph algorithms can be executed on multi-core CPUs, GPUs with thousands of cores, multi-GPU devices, and CPU+GPU clusters, depending on the size of the graph object. While programming such algorithms on heterogeneous targets, a programmer is required to deal with parallelism and also manage explicit data communication between distributed devices. This implies that a programmer is required to learn CUDA, OpenMP, MPI, etc., as well as the details of the hardware architecture. Such codes are error-prone and difficult to debug. A Domain Specific Language (DSL) which hides all the hardware details and lets the programmer concentrate only on the algorithmic logic would be very useful.

With this as the research goal, Falcon, a graph DSL, and its compiler have been developed. Falcon programs are explicitly parallel, and Falcon hides all the hardware details from the programmer. Large graphs that do not fit into the memory of a single device are automatically partitioned by the Falcon compiler. Another feature of Falcon is that it supports mutation of graph objects and thus enables programming dynamic graph algorithms. The Falcon compiler converts a single DSL program to heterogeneous targets such as multi-core CPU, GPU, multi-GPU device, and CPU+GPU cluster. Falcon-compiled codes match or outperform state-of-the-art graph frameworks across different target platforms and benchmarks.

Speaker Bio:
Unnikrishnan is a Ph.D. student in the CSA department. His areas of interest are compilation for heterogeneous architectures and compiler optimizations.

Host Faculty: Y.N. Srikant


 

PAST SEMINARS

Series: Ph.D. Colloquium
Title: Data Structures and Algorithms to Analyze Concurrency in Android Applications

  • Speaker: Ms. Pallavi Maiya H P
                   PhD Student
  • Faculty Advisor: Prof. Aditya Kanade
  • Date and Time: Tuesday, July 18, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Android is a popular mobile operating system, providing a rich ecosystem for the development of applications which run on the Android platform. Entities such as the device user, network and sensors interact continuously with the mobile device causing an Android application to face a continuous stream of input (events). In spite of having to perpetually handle a huge volume of events, Android applications are expected to be responsive to new events from the environment. The Android framework achieves this by exposing applications to a concurrency model which combines multi-threading with asynchronous event-based dispatch. While this enables development of efficient applications, unforeseen thread interleavings combined with non-deterministic ordering of events can lead to subtle concurrency bugs in Android applications.

In an Android application, threads communicate through shared variables and by sending events to each other's event queues. A thread typically processes the events in its event queue and invokes the corresponding event handlers in FIFO order; this ordering, however, can be altered by attributes such as delays and priorities associated with the events. Existing literature extensively describes techniques to detect concurrency bugs such as data races, deadlocks and atomicity violations in multi-threaded programs. Recent works also present techniques to analyze bugs manifested due to single-threaded event-driven concurrency. However, the complex event-driven semantics of Android applications combined with the thread-based semantics renders a naive extension of such concurrency analysis techniques either less effective or unsound for Android applications. This thesis takes the initial steps towards developing data structures and algorithms to facilitate effective dynamic concurrency analysis of Android applications.

We first formalize the concurrency behaviour of Android applications by giving a concurrency semantics. Using this semantics, we derive a set of rules to reason about the happens-before ordering between operations in an Android execution trace. These rules induce a partial order called the happens-before (HB) relation on the operations of an execution trace. Our HB relation generalizes the so far independently studied HB relations for multi-threaded programs and single-threaded event-driven programs. We have developed a happens-before based dynamic race detector for Android applications, called DroidRacer, which detects data races across threads as well as race conditions between memory accesses from different event handlers on the same thread. DroidRacer has detected several races in various Android applications.
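
A minimal sketch of how a happens-before based check flags a race between two accesses, assuming per-thread vector clocks have already been computed for the trace (the trace format and names below are hypothetical simplifications, not DroidRacer's implementation, which additionally orders event handlers on the same thread):

```python
# Happens-before race check over a trace of memory accesses.
# Each access carries the vector clock of its thread at the access;
# two accesses to the same location race if neither clock is ordered
# before the other and at least one of them is a write.
def hb(c1, c2):
    """c1, c2: dicts thread-id -> counter; True if c1 happens-before c2."""
    return c1 != c2 and all(c1.get(t, 0) <= c2.get(t, 0) for t in c1)

def find_races(accesses):
    """accesses: list of (location, is_write, vector_clock) tuples (assumed format)."""
    races = []
    for i in range(len(accesses)):
        for j in range(i + 1, len(accesses)):
            loc1, w1, c1 = accesses[i]
            loc2, w2, c2 = accesses[j]
            if loc1 == loc2 and (w1 or w2) and not hb(c1, c2) and not hb(c2, c1):
                races.append((i, j))      # concurrent, conflicting accesses
    return races

# Two unordered writes to the same location -> reported as a race.
trace = [("x", True, {"T1": 1}), ("x", True, {"T2": 1})]
print(find_races(trace))   # [(0, 1)]
```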

We identify a major bottleneck in the computation of the HB relation for Android applications, namely, discovering the HB order between event handlers executed on the same thread. The HB order between events in Android is characterized by complex HB rules which order a pair of event handlers by inspecting, for example, the order between the operations which enqueued the corresponding events. Android applications usually receive a large number of events even within a short span of time, which increases the cost of evaluating such HB rules. As a result, HB computation using standard data structures such as vector clocks alone becomes inefficient in the case of Android applications. We propose a novel data structure, called "event graph", that maintains a subset of the HB relation to efficiently infer order between any pair of events. We present an algorithm, called EventTrack, which improves the efficiency of vector clock based HB computation for event-driven programs using event graphs. Evaluation of EventTrack against a state-of-the-art HB computation technique for Android applications yielded significant speedup.

The scope of a happens-before based dynamic race detector is limited to the thread interleaving corresponding to the execution trace inspected. However, a systematic state space exploration technique such as a stateless model checker can explore all the thread interleavings, and also identify harmful manifestations of data races which may happen only when multiple racing memory accesses are performed in a particular order. Partial order reduction (POR) is a technique used by stateless model checkers to reduce the state space to be explored while preserving certain interesting program properties. The existing POR techniques selectively explore different thread interleavings only to reorder pairs of racing operations from different threads. However, they fail to follow this strategy for events, and hence end up eagerly exploring all possible orderings between event handlers executed on the same thread, even if reordering them does not lead to different states. We propose a new POR technique based on a novel backtracking set called the "dependence-covering set". Events handled by the same thread are reordered by our POR technique only if necessary. We prove that exploring dependence-covering sets suffices to detect all deadlock cycles and assertion violations. To evaluate the effectiveness of our POR scheme, we have implemented a dynamic algorithm to compute dependence-covering sets. On execution traces obtained from a few Android applications, we demonstrate that our technique explores many fewer transitions (often orders of magnitude fewer) compared to exploration based on persistent sets, a popular POR technique which explores all possible orderings between events. The tools developed in this thesis have been made public: (i) DroidRacer, a race detector for Android applications; (ii) a standalone implementation of EventTrack, to compute the HB relation over Android execution traces; and (iii) EM-Explorer, a proof-of-concept model checking framework which simulates the non-deterministic behaviour exhibited by Android applications.


 

Series: M.Sc. (Engg) Colloquium
Title: Towards a characterization of the symmetries of Nisan-Wigderson polynomial family

  • Speaker: Mr. Nikhil Gupta
                   M.Sc. (Engg) student
                   Dept. of CSA
                   IISc
  • Faculty Advisor: Dr. Chandan Saha
  • Date and Time: Friday, July 14, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Understanding the structure and complexity of a polynomial family is a fundamental problem in arithmetic circuit complexity. There are various approaches to this, such as studying lower bounds, which deal with finding the smallest circuit required to compute a polynomial, and studying the orbit and stabilizer of a polynomial with respect to invertible transformations. We have a rich understanding of some of the well known polynomial families like the determinant, permanent, and IMM. In this thesis we study some of the structural properties of the polynomial family called the Nisan-Wigderson polynomial family. This polynomial family is inspired by a well known combinatorial design called the Nisan-Wigderson design, and has recently been used to prove strong lower bounds on some restricted classes of arithmetic circuits ([Kayal-Limaye-Saha-Srinivasan(14)], [Kayal-Saha-Saptharishi(14)], [Kayal-Saha-Tavenas(16)]). But unlike for the determinant, permanent, and IMM, our understanding of the Nisan-Wigderson polynomial family is inadequate. For example, we do not know whether this polynomial family is in VP, is VNP-complete, or is VNP-intermediate assuming VP is not equal to VNP, nor do we have an understanding of the complexity of its equivalence test. We hope that the knowledge of some of the inherent properties of the Nisan-Wigderson polynomial, like its group of symmetries and Lie algebra, would provide us some insights in this regard.

An invertible n x n matrix A is called a symmetry of an n-variate polynomial f if f(A·X) = f(X), where X is the vector of n variables. The set of symmetries of f forms a subgroup of the general linear group, known as the group of symmetries of f and denoted G_f. A vector space is attached to G_f to gain a complete understanding of the symmetries of f. This vector space is known as the Lie algebra of the group of symmetries of f (or the Lie algebra of f), represented as L_f. The Lie algebra of f contributes some elements of G_f, known as the continuous symmetries of f. The Lie algebra has also been instrumental in designing efficient randomized equivalence tests for some polynomial families like the determinant, permanent, and IMM ([Kayal(11)], [Kayal-Nair-Saha-Tavenas(17)]).
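
In symbols, one standard way to state these two objects (restated here for convenience, with notation chosen to match the abstract rather than taken from the thesis) is:

```latex
% Group of symmetries of an n-variate polynomial f over a field F, and its
% Lie algebra: the "infinitesimal" symmetries, i.e. first-order perturbations
% of the identity that fix f up to epsilon^2 terms.
\[
  G_f = \{ A \in \mathrm{GL}_n(\mathbb{F}) : f(A \cdot X) = f(X) \},
  \qquad
  L_f = \{ B \in \mathbb{F}^{n \times n} :
           f\big((I + \varepsilon B) \cdot X\big) \equiv f(X) \pmod{\varepsilon^2} \}.
\]
```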

In this work we completely characterize the Lie algebra of the Nisan-Wigderson polynomial family. We show that L_NW contains diagonal matrices of a specific type. The Lie algebra L_NW turns out to be a proper linear subspace of L_Perm, the Lie algebra of the symbolic permanent polynomial. The knowledge of L_NW not only helps us to completely determine the continuous symmetries of the Nisan-Wigderson polynomial family, but also gives some crucial insights into its other symmetries (i.e. the discrete symmetries). Thereafter, using the Hessian matrix of the Nisan-Wigderson polynomial and the concept of evaluation dimension, we are able to almost completely identify the structure of G_NW. In particular, we prove that any A in G_NW is a product of diagonal and permutation matrices of a certain kind that we call block-permuted permutation matrices. Finally, we give explicit examples of nontrivial block-permuted permutation matrices using Frobenius automorphisms, which establishes the richness of the discrete symmetries of the Nisan-Wigderson polynomial family.


 

Series: Ph.D. Thesis Defense
Title: Gröbner Basis Algorithms for Polynomial Ideal Theory over Noetherian Commutative Rings

  • Speaker: Ms. Maria Francis
  • Faculty Advisor: Prof. Ambedkar Dukkipati
  • Date and Time: Thursday, July 13, 2017, 2:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
One of the fundamental problems in commutative algebra and algebraic geometry is to understand the nature of the solution space of a system of multivariate polynomial equations over a field $\Bbbk$, such as the real or complex numbers. An important algorithmic tool in this study is the notion of Gröbner bases (Buchberger, 1965). Given a system of polynomial equations, $f_1 = 0, \ldots, f_m = 0$, a Gröbner basis is a "canonical" generating set of the ideal generated by $f_1, \ldots, f_m$ that can answer, constructively, many questions in computational ideal theory. It generalizes several concepts of univariate polynomials, like resultants, to the multivariate case, and answers decisively the ideal membership problem. The dimension of the solution set of an ideal $\mathfrak{a}$, called the affine variety, an important concept in algebraic geometry, is equal to the Krull dimension of the corresponding coordinate ring, $\Bbbk[x_1, \ldots, x_n]/\mathfrak{a}$. Gröbner bases were first introduced to compute $\Bbbk$-vector space bases of $\Bbbk[x_1, \ldots, x_n]/\mathfrak{a}$ and to use them to characterize zero-dimensional solution sets. Since then, Gröbner basis techniques have provided a generic algorithmic framework for computations in control theory, cryptography, formal verification, robotics, etc., that involve multivariate polynomials over fields.
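
As a small, concrete illustration of the ideal membership use of Gröbner bases, here is a sketch using the SymPy computer algebra library over the rationals (the field case only, not the thesis's setting of general Noetherian coefficient rings):

```python
# Ideal membership via a Groebner basis over Q, using SymPy.
from sympy import symbols, groebner

x, y = symbols('x y')
# Generators of an ideal a = <f1, f2> in Q[x, y]
f1 = x**2 + y**2 - 1
f2 = x - y
G = groebner([f1, f2], x, y, order='lex')

# A polynomial lies in the ideal iff it reduces to zero modulo the
# Groebner basis; SymPy exposes this check as GroebnerBasis.contains.
g_in = f1 + y * f2          # an explicit combination of the generators
g_out = x + y               # does not vanish on the variety of <f1, f2>
print(G.contains(g_in))     # True
print(G.contains(g_out))    # False
```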

The main aim of this thesis is to study problems related to computational ideal theory over Noetherian commutative rings (e.g., the ring of integers $\mathbb{Z}$, the polynomial ring over a field $\Bbbk[y_1, \ldots, y_m]$, etc.) using the theory of Gröbner bases. These problems surface in many domains including lattice based cryptography, control systems, and system-on-chip design. Although formal and standard techniques are available for polynomial rings over fields, the presence of zero divisors and non-units makes developing similar techniques for polynomial rings over rings challenging.

Given a polynomial ring $A[x_1, \ldots, x_n]$ over a Noetherian commutative ring $A$ and an ideal $\mathfrak{a}$ in $A[x_1, \ldots, x_n]$, the first fundamental problem that we study is whether the residue class polynomial ring $A[x_1, \ldots, x_n]/\mathfrak{a}$ is a free $A$-module or not. Note that when $A = \Bbbk$, the answer is always 'yes', and the $\Bbbk$-vector space basis of $\Bbbk[x_1, \ldots, x_n]/\mathfrak{a}$ plays an important role in computational ideal theory over fields. In our work, we give a Gröbner basis characterization for $A[x_1, \ldots, x_n]/\mathfrak{a}$ to have a free $A$-module representation w.r.t. a monomial ordering. For such $A$-algebras, we give an algorithm to compute an $A$-module basis. This extends the Macaulay-Buchberger basis theorem to polynomial rings over Noetherian commutative rings. These results help us develop a theory of border bases in $A[x_1, \ldots, x_n]$ when the residue class polynomial ring is finitely generated. The theory of border bases is handled as two separate cases: (i) $A[x_1, \ldots, x_n]/\mathfrak{a}$ is free and (ii) $A[x_1, \ldots, x_n]/\mathfrak{a}$ has torsion submodules.

For the special case of $A = \mathbb{Z}$, we show how short reduced Gröbner bases and the characterization for a free $A$-module representation help identify the cases when $\mathbb{Z}[x_1, \ldots, x_n]/\mathfrak{a}$ is isomorphic to $\mathbb{Z}^N$ for some $N \in \mathbb{N}$. Ideals in such $\mathbb{Z}$-algebras are called ideal lattices. These structures are interesting since we can use the algebraic structure $\mathbb{Z}[x_1, \ldots, x_n]/\mathfrak{a}$ as a representation for point lattices and extend all the computationally hard problems in point lattice theory to $\mathbb{Z}[x_1, \ldots, x_n]/\mathfrak{a}$. Univariate ideal lattices are widely used in lattice based cryptography, for they are a more compact representation of lattices than matrices. In this thesis, we give a characterization for multivariate ideal lattices and construct collision resistant hash functions based on them using Gröbner basis techniques. For the construction of hash functions, we define a worst case problem, the shortest substitution problem w.r.t. an ideal in $\mathbb{Z}[x_1, \ldots, x_n]$, and establish hardness results for this problem.

Finally, we develop an approach to compute the Krull dimension of $A[x_1, \ldots, x_n]/\mathfrak{a}$ using Gröbner bases, when $A$ is a Noetherian integral domain. When $A$ is a field, the Krull dimension of $A[x_1, \ldots, x_n]/\mathfrak{a}$ has several equivalent algorithmic definitions by which it can be computed. But this is not true in the case of arbitrary Noetherian rings. We introduce the notion of the combinatorial dimension of $A[x_1, \ldots, x_n]/\mathfrak{a}$ and give a Gröbner basis method to compute it for residue class polynomial rings that have a free $A$-module representation w.r.t. a lexicographic ordering. For such $A$-algebras, we derive a relation between the Krull dimension and the combinatorial dimension of $A[x_1, \ldots, x_n]/\mathfrak{a}$. For $A$-algebras that have a free $A$-module representation w.r.t. degree compatible monomial orderings, we introduce the concepts of Hilbert function, Hilbert series and Hilbert polynomials and show that Gröbner basis methods can be used to compute these quantities. We then proceed to show that the combinatorial dimension of such $A$-algebras is equal to the degree of the Hilbert polynomial. This enables us to extend the relation between Krull dimension and combinatorial dimension to $A$-algebras with a free $A$-module representation w.r.t. a degree compatible ordering as well.


 

Series: M.Sc. (Engg) Colloquium
Title: Fully Resilient Non-Interactive Id-based Hierarchical Key Agreement

  • Speaker: Mayank Tiwari (MSc Engg.)
  • Faculty Advisor: Dr. Sanjit Chatterjee
  • Date and Time: Tuesday, July 11, 2017, 12:00 PM
  • Venue: CSA Multimedia Class (Room No. 252, First Floor)

Abstract
Non-Interactive Key Agreement (NIKA) is a cryptographic primitive which allows two parties to agree on a shared key without any interaction. Identity-based Non-Interactive Key Agreement (ID-NIKA) allows each party to compute the shared secret key using its own secret key and the peer's identity. ID-NIKA can be used to establish shared secret keys in ad-hoc networks using minimal battery power and communication.

A Mobile Ad-hoc NETwork (MANET) is a network of mobile and moderately resource-constrained devices communicating through a wireless medium. Examples of standard MANET devices are laptops, cellphones, etc. Due to inherent characteristics like mobility, dynamic topology and lack of centralized infrastructure, MANETs face some serious security issues. We are particularly interested in ID-NIKA in MANETs, as it is of crucial interest for secure communication between two nodes in MANETs.

In 2008, Gennaro et al. introduced a scheme called the Hybrid Hierarchical Key Agreement Scheme (HH-KAS). HH-KAS uses a linear hierarchical key agreement scheme at the non-leaf levels and a key agreement scheme due to Sakai et al. (referred to as SOK-KAS) at the leaf level. HH-KAS is (i) non-interactive, (ii) identity based, (iii) hierarchical and (iv) fully resilient against node compromises at the leaf level and resilient against node compromises up to certain threshold values at non-leaf levels. Thus one can say that HH-KAS is partially resilient against node compromises. In their paper, the authors claim that there is no key agreement scheme for MANETs in the literature with all four properties above, and pose this as an interesting open problem in the area.

Guo et al. proposed a scheme known as the Strong Key Agreement Scheme (SKAS) in 2012. The authors claimed it to be a potential solution to the open problem posed by Gennaro et al. However, in 2014, Zhu et al. showed a concrete attack on SKAS, which makes it practically useless for real-life applications.

Our main contribution is a hybrid scheme using two existing schemes. Our scheme uses a deterministic key pre-distribution scheme by Lee and Stinson, termed the Basic Id One-way function Scheme (BIOS), at level 1 (where the root is at level 0). Beyond level 1, we use SOK-KAS for key agreement. We refer to our scheme as the BIOS-SOK key agreement. The BIOS and SOK schemes satisfy properties (i), (ii) and (iv), but neither of them is hierarchical in nature. In our work we combine both schemes into a scheme that is hierarchical in nature. Thus, the BIOS-SOK scheme satisfies (i), (ii), (iii) and is also fully resilient against an arbitrary number of node compromises at any level.
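
For reference, the SOK-style pairing-based agreement used beyond level 1 works roughly as follows (a standard description of the Sakai et al. primitive with notation chosen here, not the thesis's exact formulation):

```latex
% Sakai-Ohgishi-Kasahara (SOK) non-interactive key agreement (sketch).
% Setup: a bilinear pairing e : G x G -> G_T, a hash H : {0,1}^* -> G, and
% a master secret s held by the key generation centre, which issues
% private keys d_A = s H(ID_A) and d_B = s H(ID_B).
\[
  K_{AB} \;=\; e\big(d_A,\, H(\mathrm{ID}_B)\big)
         \;=\; e\big(H(\mathrm{ID}_A),\, H(\mathrm{ID}_B)\big)^{s}
         \;=\; e\big(H(\mathrm{ID}_A),\, d_B\big) \;=\; K_{BA},
\]
% so each party derives the same key from its own private key and the
% peer's identity alone, with no message exchange.
```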

The BIOS-SOK scheme also offers low space requirements, low shared-key computation time and better scalability for many real-life applications when compared with the scheme of Gennaro et al. In HH-KAS, key agreement is carried out only at the leaf level. In the BIOS-SOK scheme, any two nodes in the hierarchy (at the same or different levels) can compute a shared secret key. We also provide a rigorous security analysis for our scheme in a stronger security model than the one used for HH-KAS.


 

Series: M.Sc. (Engg) Colloquium
Title: Improved lower bounds for depth four arithmetic circuits.

  • Speaker: Mr. Abhijat Sharma
                   M.Sc. (Engg)
  • Faculty Advisor: Dr. Chandan Saha
  • Date and Time: Tuesday, July 11, 2017, 11:00 AM
  • Venue: CSA Multimedia Class (Room No. 252, First Floor)

Abstract
We study the problem of proving lower bounds for depth four arithmetic circuits. Depth four circuits have received much attention in recent circuit lower bound results, owing to a series of results culminating in a proof that strong enough lower bounds for depth four circuits would imply super-polynomial lower bounds for general arithmetic circuits, and hence resolve one of the most central open problems in algebraic complexity, i.e., a separation between the VP and VNP complexity classes.

However, despite several efforts, the best known lower bound for general circuits remains Omega(N log N), where N is the number of input variables.

In the case of arithmetic formulas, an Omega(N^2) lower bound is known, which has not been improved since it was proved.

The situation for depth three arithmetic circuits was similar for many years, until a recent result achieved an almost cubic lower bound, improving upon the previous best quadratic lower bound.

However, unlike for depth three circuits, proving a super-quadratic lower bound for depth four circuits has remained an open problem for many years.

Previous results implied a super-linear lower bound of Omega(N^{1.33}) for depth four circuits. As the main contribution of this thesis, we improve upon this super-linear bound and prove an Omega(N^{1.5}) lower bound on the size of depth four circuits computing an explicit N-variate polynomial in VNP with degree d = O(sqrt{N}).

Our approach offers a potential route to proving a super-quadratic lower bound for depth four circuits. Taking a cue from the numerous recent successful results, we use the technique of shifted partial derivative measures to achieve the said lower bound.

In particular, we use the Dimension of Projected Shifted Partials (DPSP) measure. For the choice of the hard polynomial, we again follow the status quo and use a variant of the Nisan-Wigderson (NW) polynomial family, which has proved to be very helpful over the past few years in arithmetic circuit complexity.
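
For context, the (unprojected) shifted partial derivative measure that this line of work builds on is typically defined as below; this is the standard formulation, and the DPSP measure used in the thesis additionally applies a projection to the shifted partials.

```latex
% Dimension of shifted partial derivatives of a polynomial f: take all
% k-th order partial derivatives of f, "shift" them by all monomials of
% degree at most l, and measure the dimension of the linear span.
\[
  \mathrm{SP}_{k,\ell}(f) \;=\;
  \dim\Big( \mathrm{span}_{\mathbb{F}}
    \big\{\, x^{\alpha} \cdot \partial^{\beta} f \;:\; |\beta| = k,\ |\alpha| \le \ell \,\big\} \Big).
\]
```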

Finally, we do a careful analysis of two previous works on general depth four circuits and compare their techniques to ours.

We conclude that our result can potentially be used as a starting point, and that techniques similar to those of the almost cubic lower bound result for depth three circuits can likely be used to strengthen our lower bound to Omega(N^{2.5}) for general depth four arithmetic circuits.


 

Series: Department Seminar
Title: Approximating the Sparsest Cut on Expander-Like Graphs

  • Speaker: Dr. Rakesh Venkat
                   Postdoctoral Fellow
                   Hebrew University of Jerusalem, Israel
  • Date and Time: Monday, July 10, 2017, 3:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
The Sparsest-Cut problem is a fundamental NP-Hard problem in graph optimization that has many practical applications. Simultaneously, on the theoretical side, designing approximation algorithms for it has revealed intriguing connections to questions in geometry involving metric spaces and embeddings. The best known algorithm for the Sparsest Cut problem, due to Arora, Rao and Vazirani (2004), gives an $O(\sqrt{\log n})$ approximation on n-vertex graphs, and this is believed to be the best possible in general. However, one could ask if better approximation algorithms exist for specific graph classes. In this talk, I will survey the relevant background, and describe better approximation algorithms for the Sparsest Cut problem on an important class of graphs called low threshold-rank graphs, which are generalizations of the well-known class of expander graphs. (Based on joint works with Amit Deshpande, Prahladh Harsha and Yuval Rabani.)
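
For reference, one common formulation of the (uniform) Sparsest-Cut objective that the $O(\sqrt{\log n})$ guarantee refers to is the following; the talk's results concern approximating this quantity on low threshold-rank graphs.

```latex
% Uniform sparsest cut of a graph G = (V, E): the number of edges crossing
% a cut, normalized by the number of vertex pairs the cut separates.
\[
  \Phi(G) \;=\; \min_{\emptyset \neq S \subsetneq V}
    \frac{|E(S,\, V \setminus S)|}{|S| \cdot |V \setminus S|}.
\]
```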

Speaker Bio:
Rakesh Venkat is currently a Postdoctoral Fellow at the Hebrew University of Jerusalem, Israel with Prof. Yuval Rabani. He received his PhD from TIFR, Mumbai under the supervision of Prof. Prahladh Harsha. His research interests include topics in approximation algorithms, complexity theory and communication complexity.

Host Faculty: Dr. Anand Louis


 

Series: M.Sc. (Engg) Colloquium
Title: A Fine Grained Dynamic Information Flow Analysis for Android Apps

  • Speaker: Shyam Sankaran
  • Faculty Advisor: Prof. Aditya Kanade
  • Date and Time: Friday, July 07, 2017, 11:30 AM
  • Venue: CSA Multimedia Class (Room No. 252, First Floor)

Abstract
Android has been steadily gaining popularity ever since its launch in 2008. One of the major factors for this is the easy availability of a large variety of apps. They range from simple apps such as calculators to apps which help people maintain their schedules and thus manage many aspects of their lives. In addition, a lot of free apps are available to the user thanks to the power of in-app purchases and advertisements. However, these also raise many security concerns. Apps are privy to a lot of private information regarding the user, such as their contacts, location, etc. It is essential to ascertain that apps do not leak such information to untrustworthy entities. In order to solve this problem, there have been many static and dynamic analyses which aim to track private data accessed or generated by the app to its destination. Such analyses are commonly known as information flow analyses. Dynamic analysis techniques, such as TaintDroid, track private information and alert the user when it is accessed by specific API calls. However, they do not track the path taken by the information, which can be useful in debugging and validation scenarios.
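
A toy illustration of the dynamic taint-propagation idea underlying tools like TaintDroid (the class and the two API functions below are hypothetical; real systems instrument the runtime instead of wrapping values explicitly):

```python
# Toy dynamic taint tracking: values carry source labels that propagate
# through computations; a sink call on a tainted value raises an alert.
# Note that, as in TaintDroid, only the fact of the flow is reported --
# the path the data took through the program is not recorded.
class Tainted:
    def __init__(self, value, labels=frozenset()):
        self.value, self.labels = value, frozenset(labels)
    def __add__(self, other):
        o_val = other.value if isinstance(other, Tainted) else other
        o_lab = other.labels if isinstance(other, Tainted) else frozenset()
        return Tainted(self.value + o_val, self.labels | o_lab)

def get_device_id():                    # hypothetical "source" API
    return Tainted("IMEI-123456", {"device-id"})

def send_to_network(data):              # hypothetical "sink" API
    if isinstance(data, Tainted) and data.labels:
        print("ALERT: leaking", sorted(data.labels), "to the network")

msg = Tainted("id=") + get_device_id()  # taint propagates through the '+'
send_to_network(msg)                    # -> ALERT: leaking ['device-id'] ...
```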

We propose a model to perform dynamic information flow analysis, inspired by FlowDroid and TaintDroid, at a fine granularity so that we can obtain the path taken by the information flow. The model uses path-edges to track the information flows during a dynamic run, which can then be used to obtain the paths taken by the flows. We have implemented this model and used it to obtain path-edges which can then be seeded into FlowDroid to obtain sound and quicker results compared to stand-alone runs of FlowDroid. We tested the model on 10 real-world apps, where we find on average about 20% of the total path-edges found by FlowDroid, which gives an average 1.8x speed-up of the analysis.


 

Series: Department Seminar
Title: Testing and Analysis of Web Applications using Page Models

  • Speaker: Snigdha Athaiya
                   PhD student
  • Faculty Advisor: Prof. K V Raghavan
  • Date and Time: Wednesday, July 05, 2017, 4:00 PM
  • Venue: CSA Multimedia Class (Room No. 252, First Floor)

Abstract
Web applications are difficult to analyze using code-based tools because data-flow and control-flow through the application occurs via both server-side code and client-side pages. Client-side pages are typically specified in a scripting language that is different from the main server-side language; moreover, the pages are generated dynamically from the scripts. To address these issues we propose a static-analysis approach that automatically constructs a "model" of each page in a given application. A page model is a code fragment in the same language as the server-side code, which faithfully over-approximates the possible elements of the page as well as the control-flows and data-flows due to these elements. The server-side code in conjunction with the page models then becomes a standard (non-web) program, thus amenable to analysis using standard code-based tools.

We have implemented our approach in the context of J2EE applications. We demonstrate the versatility and usefulness of our approach by applying three standard analysis tools on the resultant programs from our approach: a concolic-execution based model checker (JPF), a dynamic fault localization tool (Zoltar), and a static slicer (Wala).

*This is a practice talk for ISSTA 2017*


 

Series: Ph.D. Thesis Defense
Title: Automatic Storage Optimization of Arrays in Affine Loop Nests

  • Speaker: Somashekaracharya G Bhaskaracharya
  • Faculty Advisor: Uday Kumar Reddy B
  • Date and Time: Wednesday, July 05, 2017, 11:30 AM
  • Venue: CSA Multimedia Class (Room No. 252, First Floor)

Abstract
Efficient memory usage is crucial for data-intensive applications as a smaller memory footprint ensures better cache performance and allows one to run a larger problem size given a fixed amount of main memory. The solutions found by existing techniques for automatic storage optimization for arrays in affine loop-nests, which minimize the storage requirements for the arrays, are often far from good or optimal and could even miss nearly all storage optimization potential. In this work, we present a new automatic storage optimization framework and techniques that can be used to achieve intra-array as well as inter-array storage reuse within affine loop-nests with a pre-determined schedule.

Over the last two decades, several heuristics have been developed for achieving complex transformations of affine loop-nests using the polyhedral model. However, there are no comparably strong heuristics for tackling the problem of automatic memory footprint optimization. We tackle the problem of storage optimization for arrays by formulating it as one of finding the right storage partitioning hyperplanes: each storage partition corresponds to a single storage location. Statement-wise storage partitioning hyperplanes are determined that partition a unified global array space so that values with overlapping live ranges are not mapped to the same partition. Our integrated heuristic for exploiting intra-array as well as inter-array reuse opportunities is driven by a fourfold objective function that not only minimizes the dimensionality and storage requirements of arrays required for each high-level statement, but also maximizes inter-statement storage reuse.
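
To make the storage-mapping idea concrete, here is a hand-written illustration (not SMO output, and in Python rather than an affine-loop-nest language) of intra-array contraction for a simple 1D Jacobi-style stencil: since only the previous time step is live, a (T+1) x N temporary can be mapped onto two rows via a modulo-2 storage mapping.

```python
# Intra-array storage reuse for a 1D Jacobi-style stencil.
# Naive version: keeps every time step, i.e. (T+1) x N storage.
def stencil_full(a, T):
    N = len(a)
    grid = [list(a)] + [[0.0] * N for _ in range(T)]
    for t in range(1, T + 1):
        grid[t][0], grid[t][N - 1] = grid[t - 1][0], grid[t - 1][N - 1]  # fixed boundary
        for i in range(1, N - 1):
            grid[t][i] = (grid[t - 1][i - 1] + grid[t - 1][i + 1]) / 2.0
    return grid[T]

# Contracted version: time step t only reads step t-1, so the storage
# mapping t -> t mod 2 (two rows) preserves every live value.
def stencil_contracted(a, T):
    N = len(a)
    buf = [list(a), list(a)]              # 2 x N instead of (T+1) x N
    for t in range(1, T + 1):
        cur, prev = buf[t % 2], buf[(t - 1) % 2]
        for i in range(1, N - 1):
            cur[i] = (prev[i - 1] + prev[i + 1]) / 2.0
    return buf[T % 2]

a = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0]
assert stencil_full(a, 3) == stencil_contracted(a, 3)
```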

We built an automatic polyhedral storage optimizer called SMO using our storage partitioning approach. Storage reduction factors and other results that we obtained from SMO demonstrate the effectiveness of our approach on several benchmarks drawn from the domains of image processing, stencil computations, high-performance computing, and the class of tiled codes in general. The reductions in storage requirement over previous approaches range from a constant factor to asymptotic in the loop blocking factor or array extents -- the latter being a dramatic improvement for practical purposes.

As an incidental and related topic, we also studied the problem of polyhedral compilation of graphical dataflow programs. While polyhedral techniques for program transformation are now used in several proprietary and open source compilers, most of the research on polyhedral compilation has focused on imperative languages such as C, where the computation is specified in terms of statements with zero or more nested loops and other control structures around them. Graphical dataflow languages, where there is no notion of statements or a schedule specifying their relative execution order, have so far not been studied using a powerful transformation or optimization approach. The execution semantics and referential transparency of dataflow languages impose a different set of challenges. In this work, we attempt to bridge this gap by presenting techniques that can be used to extract polyhedral representation from dataflow programs and to synthesize them from their equivalent polyhedral representation. We then describe PolyGLoT, a framework for automatic transformation of dataflow programs that we built using our techniques and other popular research tools such as Clan and Pluto. For the purpose of experimental evaluation, we used our tools to compile LabVIEW, one of the most widely used dataflow programming languages. Results show that dataflow programs transformed using our framework are able to outperform those compiled otherwise by up to a factor of seventeen, with a mean speed-up of 2.30x while running on an 8-core Intel system.

Speaker Bio:
Somashekaracharya G. B is a PhD (ERP) student in the Multicore Computing Lab, Department of Computer Science and Automation, Indian Institute of Science. He obtained his Bachelor's degree in Computer Science from R. V. College of Engineering, Bangalore in 2008. He is currently working as an Advanced Senior Software Engineer at National Instruments R&D, India. His interests are broadly in compilers, dataflow languages, and parallel programming.

Host Faculty: Uday Kumar Reddy B


 

Series: Department Seminar
Title: DomLock: A New Multi-Granularity Locking Technique for Hierarchies

  • Speaker: Prof. Rupesh Nasre
  • Date and Time: Wednesday, July 05, 2017, 10:00 AM
  • Venue: CSA Multimedia Class (Room No. 252, First Floor)

Abstract
This talk will present a new locking technique for hierarchies. Fine-grained and coarse-grained locking are two extremes, and their generalization, multi-granularity locking (MGL), offers several options in between. MGL has been implemented in current database systems such as Oracle and MySQL using a traversal-based protocol called intention locks. I will highlight limitations of intention locks and propose a new way to perform MGL based on a DFS-like numbering of the hierarchy. Key to the technique is modeling the hierarchical structure in a numerical manner. The talk will end by highlighting limitations of the proposed technique and a few interesting open problems.
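
A minimal sketch of the numbering idea for a tree-shaped hierarchy (my own simplification for illustration; DomLock itself handles general DAG hierarchies): give every node the interval of DFS numbers spanned by its subtree, so that locking a node logically locks its subtree, and two lock requests conflict exactly when their intervals overlap.

```python
# Interval-based multi-granularity locking sketch for a tree hierarchy.
def assign_intervals(children, root):
    """children: dict node -> list of child nodes (assumed shape)."""
    intervals, counter = {}, [0]
    def dfs(v):
        counter[0] += 1
        low = counter[0]
        for c in children.get(v, []):
            dfs(c)
        intervals[v] = (low, counter[0])   # covers every DFS number in v's subtree
    dfs(root)
    return intervals

def conflicts(iv1, iv2):
    (l1, h1), (l2, h2) = iv1, iv2
    return not (h1 < l2 or h2 < l1)        # overlapping intervals -> conflicting locks

tree = {"root": ["A", "B"], "A": ["A1", "A2"], "B": ["B1"]}
iv = assign_intervals(tree, "root")
print(conflicts(iv["A"], iv["A1"]))   # True: A1 lies inside A's subtree
print(conflicts(iv["A"], iv["B"]))    # False: disjoint subtrees
```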

Speaker Bio:
Rupesh Nasre is an Assistant Professor in the CSE department at IIT Madras. He completed PhD from IISc Bangalore and Post-Doctoral Fellowship from the University of Texas at Austin. His research focus is in Compilers and Parallelization. He is a recipient of the Young Faculty Recognition Award at IIT Madras, NVIDIA Special Prize for CodeForScience, a winner of the Yahoo! University Hack Day, and holds five US patents.

Host Faculty: Uday Kumar Reddy B


 

Series: Department Seminar
Title: Efficient Computation of Happens-Before Relation for Event-Driven Programs

  • Speaker: Ms. Pallavi Maiya
                   PhD Student
  • Faculty Advisor: Prof. Aditya Sunil Kanade
  • Date and Time: Monday, July 03, 2017, 12:00 PM
  • Venue: CSA Multimedia Class (Room No. 252, First Floor)

Abstract
An emerging style of programming is to use both threads and events to achieve better scalability. The improved scalability comes at the price of increased complexity, as both threads and events can follow non-deterministic schedules. The happens-before (HB) relation captures the space of possible schedules and forms the basis of various concurrency analyses. Improving efficiency of the HB computation can speed up these analyses.

In this paper, we identify a major bottleneck in computation of the HB relation for such event-driven programs. Event-driven programs are designed to interact continuously with their environment, and usually receive a large number of events even within a short span of time. This increases the cost of discovering the HB order among the events. We propose a novel data structure, called event graph, that maintains a subset of the HB relation to efficiently infer order between any pair of events. We present an algorithm, called EventTrack, which improves efficiency of vector clock based HB computation for event-driven programs using event graphs.
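
A minimal sketch of the event-graph idea (an assumed interface, not EventTrack's actual implementation): keep only a sparse set of known HB edges between events and answer "does e1 happen before e2?" by reachability over that DAG.

```python
# Event graph sketch: a sparse DAG of known happens-before edges between
# events; ordering queries between any two events reduce to reachability.
class EventGraph:
    def __init__(self):
        self.succ = {}                     # event -> set of known HB successors

    def add_edge(self, e1, e2):
        """Record that e1 happens-before e2 (some HB rule ordered them)."""
        self.succ.setdefault(e1, set()).add(e2)
        self.succ.setdefault(e2, set())

    def happens_before(self, e1, e2):
        """True iff e2 is reachable from e1 via recorded HB edges."""
        if e1 == e2:
            return False
        stack, seen = list(self.succ.get(e1, ())), set()
        while stack:
            cur = stack.pop()
            if cur == e2:
                return True
            if cur not in seen:
                seen.add(cur)
                stack.extend(self.succ.get(cur, ()))
        return False

g = EventGraph()
g.add_edge("post(e1)", "post(e2)")         # e.g. e1 was posted before e2
g.add_edge("post(e2)", "post(e3)")
print(g.happens_before("post(e1)", "post(e3)"))   # True, by transitivity
print(g.happens_before("post(e3)", "post(e1)"))   # False
```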

We have implemented EventTrack and evaluated it on traces of eight Android applications. Compared to the state-of-the-art technique, EventTrack gave an average speedup of 4.9X. The speedup ranged from 1.8X to 10.3X across the applications.

*This is a practice talk for ISSTA 2017*


 

Series: M.Sc. (Engg) Thesis Defense
Title: Automatic Optimization of Geometric Multigrid Methods using a DSL Approach

  • Speaker: Mr. Vinay Vasista
                   M.Sc. (Engg) Student
                   Dept. of CSA
                   IISc
  • Faculty Advisor: Prof. Uday Kumar Reddy B
  • Date and Time: Friday, June 30, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
The Geometric Multigrid (GMG) method is widely used in numerical analysis to accelerate the convergence of partial differential equations solvers using a hierarchy of grid discretizations. Multiple grid sizes and recursive expression of multigrid cycles make the task of program optimization tedious and error-prone. A high-level language that aids domain experts to quickly express complex algorithms in a compact way using dedicated constructs for multigrid methods and with good optimization and parallelization support is thus valuable.
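
For readers unfamiliar with GMG, the sketch below is a minimal 1D V-cycle for the Poisson problem -u'' = f with zero boundary values, written in plain Python/NumPy; it is only an illustration of the cycle structure (smooth, restrict, recurse, prolongate, correct, smooth), not PolyMage code, and the smoother, grid sizes and transfer operators are choices made here.

```python
# Minimal 1D geometric multigrid V-cycle for -u'' = f, u(0) = u(1) = 0.
import numpy as np

def smooth(u, f, h, sweeps=3):
    """Weighted-Jacobi sweeps for A u = f with A = tridiag(-1, 2, -1)/h^2."""
    w = 2.0 / 3.0
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    return r[::2].copy()                     # injection onto the coarse grid

def prolong(e_coarse, n_fine):
    e = np.zeros(n_fine)                     # linear interpolation to the fine grid
    e[::2] = e_coarse
    e[1::2] = 0.5 * (e_coarse[:-1] + e_coarse[1:])
    return e

def v_cycle(u, f, h):
    if u.size <= 3:                          # coarsest grid: just smooth a lot
        return smooth(u, f, h, sweeps=50)
    u = smooth(u, f, h)                      # pre-smoothing
    r_c = restrict(residual(u, f, h))        # restrict the residual
    e_c = v_cycle(np.zeros_like(r_c), r_c, 2 * h)   # coarse-grid correction
    u += prolong(e_c, u.size)                # interpolate and correct
    return smooth(u, f, h)                   # post-smoothing

n = 65                                       # 2^6 + 1 points, nested coarser grids
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)           # exact solution is sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
print("max error:", np.max(np.abs(u - np.sin(np.pi * x))))
```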

In this work, we demonstrate how high performance can be achieved along with enhanced programmability for GMG, with new language and optimization support in the PolyMage domain specific language framework. We compare our approach with (a) hand-optimized codes, (b) hand-optimized codes optimized in conjunction with techniques based on the polyhedral compiler framework, and (c) the existing PolyMage optimizer adapted for Multigrid. We use benchmarks varying in Multigrid cycle structure, smoothing steps, and other aspects, in addition to the NAS Multigrid benchmark for evaluation.

On a 24-core Intel Xeon Haswell multicore system, our automatically optimized codes achieve a mean improvement of 3.2x over a straightforward parallelization, and an improvement of 1.31x over the PolyMage DSL optimizer. In most cases, we were also able to match or outperform hand-optimized implementations available. The results demonstrate the effectiveness of the approach on delivering high performance and high programmer productivity.

Speaker Bio:
Vinay Vasista is an M.Sc. student in the Multicore Computing Lab, Department of CSA, Indian Institute of Science. He got his bachelor's degree in Computer Science from Sri Jayachamarajendra College of Engineering, Mysore in 2013. His interests are in high performance computing, parallel programming, compilers and program optimization. He is currently working as a Software Developer at Mentor Graphics Corporation, India.


 

Series: Ph.D. Colloquium
Title: Static analysis and automated testing of file-processing programs

  • Speaker: M. Raveendra Kumar
  • Faculty Advisor: K. V. Raghavan
  • Date and Time: Thursday, June 29, 2017, 11:30 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Programs that process data residing in files are widely used in domains such as banking, healthcare, networking, and log-file analysis. For these programs, there is a strong need for automated tool support for analysis and transformation tasks such as bug detection, program understanding, and program re-modularization. However, due to the complex idioms used in file-processing programs, precise analysis of these programs as well as automated testing of these programs is a challenging task. These are the problems that we address in this thesis.

Our first contribution is about dataflow analysis of file-processing programs. Our key insight here is that the analysis of a file-processing program can be made more precise and useful if knowledge of the input file format of the program is made available to the analysis and incorporated into the transfer functions. We show empirical results for this using standard dataflow problems. We also develop two novel applications of this technique -- the first one to specialize a program to a given "restricted" input file format, and the second one to verify if a program "conforms" to a given input file format. Experiments on a set of benchmark programs showed that our approach provides more precise output than naive approaches.

For precise dataflow analysis of programs with multiple procedures, *context sensitive* inter-procedural data flow analysis is required. It is very expensive in its full form. As our second contribution, we propose a *prefix* approximation for context-sensitive analysis, wherein a prefix of the full context stack is used to tag dataflow facts. Our technique, which is in contrast with the standard *suffix* approximation technique, is designed to be more scalable on programs that have distinct *application* and *library* layers, while offering comparable or even better precision within the application layer. We analyzed several large enterprise file-processing programs using an implementation of our technique, and compared it with the suffix approximation technique. Our approach's performance was several times better than that of the suffix approximation, while generally not suffering any precision penalty in the application layer of the programs. It is notable that this approach could be equally applicable in the context of non file-processing programs as well.

Our final contribution is regarding automated testing of file-processing programs. *Fuzz testing* is an automated software-testing technique that aims to uncover run-time errors by executing the target program with a large number of inputs generated automatically and systematically. *Grey box* fuzzing is a kind of fuzzing that employs lightweight instrumentation of the program, and observes the behaviors exhibited on a set of test runs. It then uses this information to generate new test inputs that might exercise other behaviors. This process is iterated until as many behaviors as possible get exhibited. AFL is a state-of-the-art fuzzer for file-processing programs. With AFL, if a test input causes a new region of code in the program to be reached, it is considered as a new behavior. In other words, code coverage is the objective. However, a vulnerability at a program point may not get exploited in every visit to the program point; only certain program paths that lead to that point may expose the vulnerability. We extend grey-box fuzzing to take into account a given vulnerability of interest (e.g., buffer overruns), and to try explicitly to traverse paths that are likely to expose such vulnerabilities. We have implemented our approach by enhancing AFL, in the context of the buffer overrun vulnerability. We evaluated our approach on a suite of benchmark programs, and compared it with AFL in its standard form. Our approach could uncover many more buffer overflow errors in these benchmarks compared to standard AFL for the same given amount of fuzzing time.
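
For intuition, here is the skeleton of a generic coverage-guided grey-box fuzzing loop in the spirit of AFL (the two callbacks are hypothetical placeholders; AFL's actual feedback is an instrumented edge-coverage bitmap). The thesis's directed variant would change the "keep this input" decision to also favour inputs whose paths look likely to expose the targeted vulnerability, e.g. buffer overruns.

```python
# Skeleton of a coverage-guided (grey-box) fuzzing loop.
import random

def greybox_fuzz(seeds, run_with_coverage, mutate, budget=10000):
    """seeds: initial inputs; run_with_coverage(inp) -> set of covered branches
    and mutate(inp) -> mutated input are hypothetical callbacks."""
    corpus, seen = list(seeds), set()
    for inp in corpus:
        seen |= run_with_coverage(inp)          # baseline coverage of the seeds
    for _ in range(budget):
        child = mutate(random.choice(corpus))   # derive a new input
        cov = run_with_coverage(child)
        if not cov <= seen:                     # child exhibited new behaviour
            corpus.append(child)                # keep it for further mutation
            seen |= cov
    return corpus
```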

Speaker Bio:
M. Raveendra Kumar is a PhD student at the Department of CSA, IISc. He is also a researcher with TCS Innovation Labs. His interests are in static analysis and dynamic analysis of enterprise applications.


 

Series: M.Sc. (Engg) Colloquium
Title: Deep learning models for Few-shot and Metric Learning

  • Speaker: Akshay Mehrotra
  • Faculty Advisor: Prof. Ambedkar Dukkipati
  • Date and Time: Tuesday, June 27, 2017, 12:00 PM
  • Venue: CSA Multimedia Class (Room No. 252, First Floor)

Abstract
Deep neural networks achieve unprecedented performance levels over many tasks in the traditional supervised setting and scale well with large quantities of data. On the other hand, improving performance in the low-data regime is understudied, and the sheer number of parameters versus the small amount of data makes it a challenging task. The problem of learning from a very small amount of data is known as few-shot learning. A related problem is that of metric learning over disjoint training and testing classes, where the task is to learn a function that maps a data point to a low-dimensional manifold such that similar points are closer in the low-dimensional space. In this thesis, we propose a couple of deep learning approaches for few-shot learning, and then extend them to the related problem of metric learning over disjoint training and testing classes.

We first argue that a more expressive pairwise similarity matching component is crucial for solving the few-shot learning problem. Towards that end, a network design with a learnable and more expressive similarity objective is proposed by extending the deep residual network. This residual pairwise network approximates a learned metric in the representation space and outperforms the previous state-of-the-art on the challenging mini-ImageNet dataset for few-shot learning, achieving over 55% accuracy on the 5-way classification task over unseen classes. We also evaluate the generalization behaviour of deep residual networks with a varying number of parameters over classes not observed during training.

Next, since regularization plays a key role in learning with small amounts of data, an additional generator network is proposed by extending the Generative Adversarial Network (GAN) framework for disjoint training and testing classes. This provides a strong regularizer by leveraging the generated data samples. The proposed model can generate plausible variations of exemplars over unseen classes and outperforms strong discriminative baselines with L2 regularization for few shot classification tasks.

Finally, the idea of regularizing by adversarial generation is extended to the metric learning setting over disjoint training and testing classes. The proposed model is an efficient alternative to the hard negative mining scheme necessary when training with triplets in the large margin nearest neighbours setting. The proposed model shows performance and efficiency improvements over models trained without negative mining and over models trained with approximations commonly used for metric learning with triplet loss, respectively.
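
For reference, the triplet loss in the large-margin nearest-neighbour setting referred to above is typically written as follows (standard formulation; the notation is chosen here rather than taken from the thesis):

```latex
% Triplet loss over an anchor x_a, a positive x_p (same class) and a
% negative x_n (different class), with embedding f and margin m:
\[
  \mathcal{L}(x_a, x_p, x_n) \;=\;
  \max\!\Big( 0,\; \lVert f(x_a) - f(x_p) \rVert_2^2
               - \lVert f(x_a) - f(x_n) \rVert_2^2 + m \Big),
\]
% so training pushes the negative at least a margin m farther from the
% anchor than the positive; "hard negative mining" picks the negatives
% that violate this constraint the most.
```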


 

Series: Department Seminar
Title: Provably Secure Computing using Certified Software and Trusted Hardware

  • Speaker: Prof. Sanjit A. Seshia, EECS Dept., UC-Berkeley
  • Date and Time: Tuesday, June 27, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Security-critical applications face a range of threats including exploits in privileged software such as the operating system and hypervisor. In order to protect sensitive data from such privileged adversaries, there is increasing interest in developing and using secure hardware primitives, such as Intel SGX, ARM TrustZone, and Sanctum RISC-V extensions. In this talk, I will discuss the problem of building applications with provable security guarantees by including only these hardware primitives (and very little software) in the trusted computing base. We have developed compiler and verification techniques to develop applications that can be certified (at the level of machine code) to not leak secrets, both via their outputs and certain side channels. Furthermore, I will discuss how formal models of trusted hardware can be leveraged to port programs and their correctness proofs across different hardware platforms.

Speaker Bio:
Sanjit A. Seshia is a Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. He received an M.S. and Ph.D. in Computer Science from Carnegie Mellon University, and a B.Tech. in Computer Science and Engineering from IIT-Bombay. His research interests are in dependable computing and computational logic, with a current focus on applying automated formal methods to problems in cyber-physical systems, computer security, electronic design automation, and synthetic biology. His Ph.D. thesis work on the UCLID verifier and decision procedure helped pioneer the area of satisfiability modulo theories (SMT) and SMT-based verification. He is co-author of a widely-used textbook on embedded, cyber-physical systems and has led the development of technologies for cyber-physical systems education based on formal methods. His awards and honors include a Presidential Early Career Award for Scientists and Engineers (PECASE), an Alfred P. Sloan Research Fellowship, the Frederick Emmons Terman Award for contributions to electrical engineering and computer science education, and the School of Computer Science Distinguished Dissertation Award at Carnegie Mellon University.

Host Faculty: Prof. Vinod Ganapathy


 

Series: Department Seminar
Title: Understanding the cost and the opportunity to enhance programmability of heterogeneous processors

  • Speaker: Dr. Arkaprava Basu, AMD Research, Austin, TX
  • Date and Time: Monday, June 19, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
As the benefits from transistor scaling diminish, computing is turning increasingly heterogeneous. In a heterogeneous processor, domain-specific accelerators like graphics processing units (GPUs) are coupled with general-purpose CPUs for better performance and energy efficiency. However, programming such heterogeneous processors, consisting of multiple disparate computing elements, is a challenge. In this talk, we will discuss two emerging features that can address this -- namely, (1) shared virtual memory, and (2) the ability to invoke operating system services from an accelerator (e.g., a GPU). Shared virtual memory, or SVM, makes it easy to share data across the CPU and accelerators by presenting them with the same view of the memory. Here, we will discuss possible performance pitfalls of the SVM implementation in a heterogeneous processor and the opportunities to enhance its future iterations. Next, we will discuss how an accelerator's ability to invoke operating system services, like page faults and file access, is crucial to making it a true peer to the CPU. We will then discuss the performance implications of this capability and avenues for betterment and further exploration.

Finally, we will conclude the talk by discussing several future research opportunities to embrace heterogeneity across the hardware and the software stack and across the compute and the memory.

Speaker Bio:
Arkaprava (Arka) is a researcher with AMD Research at Austin,TX. Arka’s research interest broadly lies in the area of computer architecture. Since joining AMD Research in January of 2014, Arka has been exploring various aspects of virtual memory management techniques and heterogeneous processing as part of US Department of Energy’s Exascale computing initiative. Before that he obtained his PhD in Computer Science from the University of Wisconsin-Madison under the supervision of Prof. Mark D. Hill and Prof. Michael M. Swift in December, 2013. Prior to that Arka obtained M.Tech in computer science from IIT Kanpur and B.Tech from Government Engineering College in Kalyani, West Bengal.

Host Faculty: Prof. Matthew Jacob


 

Series: Department Seminar
Title: Rethinking TLB Designs in Virtualized Environments: A Very Large Part-of-Memory TLB

  • Speaker: Dr. Nagendra Gulur Dwarakanath, Texas Instruments
  • Date and Time: Friday, June 16, 2017, 12:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
With increasing deployment of virtual machines for cloud services and server applications, memory address translation overheads in virtualized environments have received great attention. With the radix-4 type of page tables used in x86 architectures, a TLB miss necessitates up to 24 memory references for one guest-to-host translation. While dedicated page walk caches and other recent enhancements eliminate many of these memory references, our measurements on Intel Skylake processors indicate that many programs in virtualized mode of execution still spend hundreds of cycles on translations that do not hit in the TLBs.
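
The 24-reference figure follows from the usual accounting for a two-dimensional (nested) page walk, sketched below; the formula is the standard one rather than something specific to this talk.

```latex
% Nested page walk with n_g guest levels and n_h host levels: every
% guest page-table pointer is a guest-physical address and must itself
% be walked through the host tables, as must the final guest-physical
% address of the data.
\[
  \text{references} \;=\; (n_g + 1)(n_h + 1) - 1,
  \qquad n_g = n_h = 4 \;\Rightarrow\; 5 \cdot 5 - 1 = 24 .
\]
```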

This talk presents an innovative scheme to reduce the cost of address translations by using a very large Translation Lookaside Buffer that is part of memory, the POM-TLB. In the POM-TLB, only one access is required instead of up to 24 accesses required in commonly used 2D walks with radix-4 type of page tables. Even if many of the 24 accesses may hit in the page walk caches, the aggregated cost of the many hits plus the overhead of occasional misses from page walk caches still exceeds the cost of one access to the POM-TLB. Since the POM-TLB is part of the memory space, TLB entries (as opposed to multiple page table entries) can be cached in large L2 and L3 data caches, yielding significant benefits. Through detailed evaluation running SPEC, PARSEC and graph workloads, we demonstrate that the proposed POM-TLB improves performance by approximately 10% on average. The improvement is more than 16% for 5 of the benchmarks. It is further seen that a POM-TLB of 16MB size can eliminate nearly all TLB misses in 8-core systems.

Speaker Bio:
Dr. Nagendra Gulur Dwarakanath works for Texas Instruments, in the area of executable specifications for automotive processors and micro controllers. He obtained his M.E. and PhD degrees from IISc. His research interests are primarily in Computer Architecture with specific focus on memory systems, heterogeneous architectures, accelerators, emerging workloads and programming models.

Host Faculty: R. Govindarajan


 

Series: Department Seminar
Title: Static Deadlock Detection for Asynchronous C# Programs

  • Speaker: Mr. Anirudh Santhiar
                   Ph.D Student
                   Dept. of CSA
                   IISc
  • Faculty Advisor: Prof. Aditya Sunil Kanade
  • Date and Time: Wednesday, June 14, 2017, 12:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Asynchronous programming is a standard approach for designing responsive applications. Modern languages such as C# provide async/await primitives for the disciplined use of asynchrony. In spite of this, programs can deadlock because of incorrect use of blocking operations along with non-blocking (asynchronous) operations. While developers are aware of this problem, there is no automated technique to detect deadlocks in asynchronous programs.

We present a novel representation of control flow and scheduling of asynchronous programs, called the continuation scheduling graph, and formulate necessary conditions for a deadlock to occur in a program. We design static analyses to construct continuation scheduling graphs of asynchronous C# programs and to identify deadlocks in them. We have implemented the static analyses in a tool called DeadWait. Using DeadWait, we found 43 previously unknown deadlocks in 11 asynchronous C# libraries. We reported the deadlocks to the library developers. They have confirmed and fixed 40 of them.
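
For intuition about the bug class DeadWait targets, here is a Python asyncio analogue of the blocking-on-asynchronous-work anti-pattern (the C# equivalent would block on task.Result inside an async context); this sketch is deliberately broken and would hang if the commented-out call were run.

```python
# Analogue (in Python asyncio) of mixing blocking and non-blocking operations.
import asyncio

async def fetch_data():
    await asyncio.sleep(0.1)            # some asynchronous work
    return 42

def sync_wrapper(loop):
    # Schedule the coroutine on the loop, then BLOCK waiting for its result.
    fut = asyncio.run_coroutine_threadsafe(fetch_data(), loop)
    return fut.result()                 # blocks the calling thread

async def handler():
    loop = asyncio.get_running_loop()
    # Calling the blocking wrapper from the event-loop thread itself means
    # the loop can never actually run fetch_data(), so fut.result() never
    # returns: a deadlock caused by blocking on non-blocking work.
    return sync_wrapper(loop)

# asyncio.run(handler())   # would deadlock -- shown only to illustrate the pattern
```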



*This is a practice talk for PLDI 2017*


 

Series: Department Seminar
Title: Cumulative prospect theory meets bandits and reinforcement learning

  • Speaker: Dr. Prashanth L.A
                   Department of Computer Science and Engineering
                   Indian Institute of Technology Madras
  • Date and Time: Monday, June 12, 2017, 11:30 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Traditional optimization approaches rely on the central assumption that expected (value) utility appropriately captures preferences, but there is overwhelming evidence in behavioral economics research that indicates that this is not the case for human decision making. Violations of the expected value-based preferences in human-based decision making systems can be addressed by incorporating distortions in the underlying probabilities of the system. We incorporate probabilistic distortions in a stochastic optimization framework, specifically using the well-established cumulative prospect theory (CPT) of Tversky and Kahneman. The stochastic optimization setting presents two particular challenges when CPT is applied: estimating the CPT objective requires estimations of the entire distribution of the underlying random variable, and traditional optimization approaches must deal with biased (albeit asymptotically unbiased) samples of the CPT objective. We propose sample-efficient estimation schemes and use the resulting estimators for optimizing an appropriate CPT objective in both finite discrete optimization (e.g., multi-armed bandit models, ranking & selection) and in settings with continuous decision variables (e.g., policy optimization for Markov decision processes) using gradient-based and global stochastic optimization algorithms. Two transportation applications will be used to test and validate the proposed algorithms in the CPT framework.
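
For reference, the CPT value of a random variable X that replaces the plain expectation in this line of work is commonly written as follows (a standard formulation; the exact notation is chosen here).

```latex
% CPT value of X: u^+ and u^- are utility functions for gains and losses,
% and w^+, w^- are probability weighting functions that distort the tail
% probabilities, replacing the ordinary expectation E[X].
\[
  C(X) \;=\; \int_0^{\infty} w^{+}\!\big( P\big(u^{+}(X) > z\big) \big)\, dz
        \;-\; \int_0^{\infty} w^{-}\!\big( P\big(u^{-}(X) > z\big) \big)\, dz .
\]
```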

Speaker Bio:
Prashanth L.A. is an Assistant Professor in the Department of Computer Science and Engineering at Indian Institute of Technology Madras. Prior to this, he was a postdoctoral researcher at the Institute for Systems Research, University of Maryland - College Park from 2015 to 2017 and at INRIA Lille - Team SequeL from 2012 to 2014. From 2002 to 2009, he was with Texas Instruments (India) Pvt Ltd, Bangalore, India. He received his Masters and Ph.D degrees in Computer Science and Automation from Indian Institute of Science, in 2008 and 2013, respectively. He was awarded the third prize for his Ph.D. dissertation, by the IEEE Intelligent Transportation Systems Society (ITSS). He is the coauthor of a book entitled `Stochastic Recursive Algorithms for Optimization: Simultaneous Perturbation Methods', published by Springer in 2013. His research interests are in reinforcement learning, stochastic optimization and multi-armed bandits, with applications in transportation systems, wireless networks and recommendation systems.

Host Faculty: Prof. Shalabh Bhatnagar

Copyright: CSA, IISc 2017 | Phone: +91-80-22932368 | Fax: +91-80-23602911