Department of Computer Science and Automation, Indian Institute of Science, Bangalore, India

UPCOMING SEMINARS

Series: Ph.D. Colloquium
Title: Deep Learning with Minimal Supervision

  • Speaker: Mr. Gaurav Pandey
                   Ph.D student
                   Dept. of CSA
  • Faculty Advisor: Prof. Ambedkar Dukkipati
  • Date and Time: Tuesday, September 26, 2017, 12:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
In recent years, deep neural networks have achieved extraordinary performance on supervised learning tasks. Convolutional neural networks (CNNs) have vastly improved the state of the art for most computer vision tasks, including object recognition and segmentation. However, their success relies on the availability of a large amount of labeled data. In contrast, relatively little work has been done in deep learning to handle scenarios where access to ground truth is limited, partial, or completely absent. In this thesis, we propose deep neural network models to handle challenging problems with limited labeled information.

Our first contribution is a neural architecture that allows for the extraction of infinitely many features from an object while keeping inference tractable. This is achieved via the `kernel trick': we express the inner product in the infinite-dimensional feature space as a kernel. The kernel can either be computed exactly for single-layer feedforward networks, or approximated by an iterative algorithm for deep convolutional networks. The corresponding models are referred to as stretched deep networks (SDNs). We show that when the amount of training data is limited, SDNs with random weights drastically outperform fully supervised CNNs with similar architectures.
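
The `kernel trick' the abstract refers to can be illustrated with a minimal, self-contained sketch. This is not the SDN model itself; the RBF kernel and kernel ridge regression below are illustrative stand-ins for the idea of working with an infinite-dimensional feature space through a finite kernel matrix:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # k(x, y) = exp(-gamma * ||x - y||^2) is an inner product in an
    # infinite-dimensional feature space, yet costs only O(d) to evaluate.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit(X, y, lam=1e-3, gamma=1.0):
    # Kernel ridge regression: solve (K + lam*I) alpha = y, so the model
    # never materializes the infinite feature map, only the n x n kernel.
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, X_test, gamma=1.0):
    # f(x) = sum_i alpha_i * k(x_i, x)
    return rbf_kernel(X_test, X_train, gamma) @ alpha

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 0.0])
alpha = fit(X, y)
pred = predict(X, alpha, X)  # near-interpolates the training targets
```

With a small ridge parameter the fitted function passes close to the training targets, despite the feature space being infinite-dimensional.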

While SDNs perform reasonably well for classification with limited labeled data, they cannot utilize unlabeled data, which is often much easier to obtain. A common approach to utilizing unlabeled data is to couple the classifier with an autoencoder (or one of its variants), thereby minimizing the reconstruction error in addition to the classification error. We discuss the limitations of decoder-based architectures and propose a model that allows for the utilization of unlabeled data without the need for a decoder. This is achieved by jointly modeling the distribution of data and latent features in a manner that explicitly assigns zero probability to unobserved data. The joint probability of the data and the latent features is maximized using a two-step EM-like procedure. Depending on the task, we allow the latent features to be one-hot or real-valued vectors and define a suitable prior on the features. For instance, one-hot features correspond to class labels and are directly used for unsupervised and semi-supervised classification, whereas real-valued feature vectors are fed as input to simple classifiers for auxiliary supervised discrimination tasks. The proposed model, which we refer to as the discriminative encoder (or DisCoder), is flexible in the type of latent features it can capture, and achieves state-of-the-art performance on several challenging datasets.

Having addressed the problem of utilizing unlabeled data for classification, we move to a domain where obtaining labels is far more expensive: semantic image segmentation. Explicitly labeling each pixel of an image with the object that the pixel belongs to is an expensive operation, in terms of time as well as effort. Currently, only a few classes of images have been densely (pixel-level) labeled. Even among these classes, only a few images per class have pixel-level supervision. Models that rely on densely labeled images cannot utilize the much larger set of weakly annotated images available on the web. Moreover, these models cannot learn segmentation masks for new classes for which there is no densely labeled data.

Hence, we propose a model for utilizing weakly labeled data for semantic segmentation of images. This is achieved by generating fake labels for each image while simultaneously forcing the output of the CNN to satisfy the mean-field constraints imposed by a conditional random field (CRF). We show that one can enforce the CRF constraints by forcing the distribution at each pixel to be close to the distribution of its neighbors. The proposed model is very fast to train and achieves state-of-the-art performance on the popular VOC-2012 dataset for the task of weakly supervised semantic image segmentation.
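
The neighbor-closeness constraint can be sketched as a simple penalty term. The `neighbor_smoothness` function below is a hypothetical illustration of that idea (a KL divergence between each pixel's distribution and the average of its 4-neighbors), not the paper's actual CRF mean-field formulation:

```python
import numpy as np

def neighbor_smoothness(P, eps=1e-8):
    # P: (H, W, C) array of per-pixel label distributions (each sums to 1).
    # Penalize KL(p_ij || q_ij), where q_ij averages the 4-neighbor
    # distributions: a stand-in for the constraint that each pixel's
    # distribution stay close to those of its neighbors.
    H, W, C = P.shape
    total = 0.0
    for i in range(H):
        for j in range(W):
            nbrs = [P[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                    if 0 <= a < H and 0 <= b < W]
            q = np.mean(nbrs, axis=0)
            p = P[i, j]
            total += float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))
    return total

# A spatially uniform label map incurs zero penalty.
uniform = np.full((4, 4, 3), 1 / 3)
pen = neighbor_smoothness(uniform)
```

In a training loop, a term like this would be added to the loss so that the network's per-pixel outputs are pushed toward local agreement.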


 

Series: Department Seminar
Title: SOPER: Discovering the Influence of Fashion and the Many Faces of User from Session Logs using Stick Breaking Process

  • Speaker: Ms. Lucky Dhakad
                   Flipkart
  • Date and Time: Monday, September 25, 2017, 2:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Recommending lifestyle articles is of immediate interest to the e-commerce industry and is beginning to attract research attention. Commonly followed strategies, such as recommending popular items, are inadequate for this vertical for two reasons. Firstly, users have their own personal preferences over items, referred to as personal styles, which lead to the long-tail phenomenon. Secondly, each user displays multiple personas, and each persona has a preference over items that may be dictated by a particular occasion; e.g., dressing for a party differs from dressing for the office. Recommendation in this vertical crucially depends on discovering the styles of each of these multiple personas. No existing literature addresses this problem.

We posit a generative model which describes each user by a Simplex Over PERsonas (SOPER), where a persona is described as the individual's preferences over prevailing styles, modeled as topics over items. The choice of a simplex and the long-tail nature of the data necessitate the use of the stick-breaking process. The main technical contribution is an efficient collapsed Gibbs sampling algorithm for solving the attendant inference problem.

Trained on large-scale interaction logs spanning more than half a million sessions collected from an e-commerce portal, SOPER outperforms previous baselines by a large margin of 35% in identifying personas. Consequently, it comprehensively outperforms several competitive baselines on the task of recommending from a catalogue of roughly 150 thousand lifestyle articles, improving recommendation quality as measured by AUC by 12.23%. It also aids the interpretability of the uncovered personal and fashionable styles, thus advancing our understanding of the underlying phenomena.

Speaker Bio:
Lucky Dhakad completed her ME from CSA, IISc in 2016 and is presently working with Flipkart. *This is a practice talk for CIKM 2017.*

Host Faculty: Prof. Chiranjib Bhattacharyya


 

Series: M.Sc. (Engg) Thesis Defense
Title: A Study of Thompson Sampling Approach for Sleeping Multi-Armed Bandit Problems

  • Speaker: Mr. Aritra Chatterjee
                   M.Sc. (Engg) Student
                   Dept of CSA
                   IISc
  • Faculty Advisor: Prof. Y. Narahari
  • Date and Time: Monday, September 25, 2017, 11:30 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
The multi-armed bandit (MAB) problem provides a convenient abstraction for many online learning problems arising in modern applications, including Internet advertising, crowdsourcing, online procurement, and smart grids. Several variants of the MAB problem have been proposed to extend the basic model to a variety of practical and general settings. The sleeping multi-armed bandit (S-MAB) problem is one such variant, in which the subset of available arms varies with time. This study analyzes the efficacy of the Thompson Sampling algorithm for solving the S-MAB problem.

An algorithm for the classical MAB problem must choose one of K available arms (actions) in each of T consecutive rounds. Each choice of an arm generates a stochastic reward from an unknown but fixed distribution. The goal of the algorithm is to maximize the expected sum of rewards over the T rounds (or, equivalently, minimize the expected total regret) relative to the best fixed action in hindsight. In many settings, however, not all arms may be available in a given round. For example, in Internet advertising, some advertisers might stay away from the auction due to budget constraints; in crowdsourcing, some workers may be unavailable at a given time due to time-zone differences. The algorithm therefore has to work with only the set of available arms. Further, unlike in the classical setting, performance can no longer be evaluated relative to the best fixed arm in hindsight; the regret is now measured relative to the best "available" arm in hindsight. One possibility is to compute the overall regret as the worst-case regret over all possible sample paths of availabilities (that is, under adversarial selection of the available arms).

In the literature, upper confidence bound (UCB)-based approaches have been proposed and investigated for the S-MAB problem. Our contribution is to investigate a Thompson Sampling-based approach. Our key result is a logarithmic regret bound, which non-trivially generalizes a similar bound known for this approach in the classical MAB setting. Our bound also matches (up to constants) the best-known lower bound for the S-MAB problem. Finally, we show via detailed simulations that the Thompson Sampling approach in fact outperforms the known algorithms for the S-MAB problem.
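
To make the setting concrete, here is a minimal Beta-Bernoulli Thompson Sampling loop restricted to the available arms. It illustrates the sleeping-bandit interaction only; it is not the thesis's algorithm or analysis, and the arm means and availability pattern are invented for the example:

```python
import random

def sleeping_thompson(arm_means, availability, rounds, seed=1):
    # Beta-Bernoulli Thompson Sampling in the sleeping setting: each round,
    # sample a posterior mean only for the AVAILABLE arms and pull the argmax.
    rng = random.Random(seed)
    K = len(arm_means)
    succ, fail = [1] * K, [1] * K          # Beta(1, 1) priors per arm
    total = 0
    for t in range(rounds):
        avail = availability(t)            # arms awake in round t
        theta = {a: rng.betavariate(succ[a], fail[a]) for a in avail}
        a = max(theta, key=theta.get)      # exploit the sampled beliefs
        r = 1 if rng.random() < arm_means[a] else 0
        succ[a] += r
        fail[a] += 1 - r
        total += r
    return total

# Arm 2 is best overall but sleeps on odd rounds, where arm 1 is best.
means = [0.2, 0.5, 0.8]
avail = lambda t: [0, 1] if t % 2 == 1 else [0, 1, 2]
reward = sleeping_thompson(means, avail, rounds=2000)
```

Note that the benchmark here is the best available arm per round (arm 2 on even rounds, arm 1 on odd rounds), mirroring the regret notion described above.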


 

PAST SEMINARS

Series: Department Seminar
Title: Maliciously Secure Oblivious Linear Function Evaluation with Constant Overhead

  • Speaker: Mr. Satrajit Ghosh
                   PhD Student
                   Cryptography and Security Group
                   Aarhus University
                   Denmark
  • Date and Time: Thursday, September 21, 2017, 3:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
In this work we consider the problem of oblivious linear function evaluation (OLE). OLE is a special case of oblivious polynomial evaluation (OPE) and deals with the oblivious evaluation of a linear function $f(x)=ax+b$. This problem is non-trivial in the sense that the sender chooses $a,b$ and the receiver $x$, but the receiver may learn only $f(x)$. We present a highly efficient and UC-secure construction of OLE in the OT-hybrid model that requires only $O(1)$ OTs per OLE. The construction is based on the noisy encodings introduced by Naor and Pinkas (STOC'99) and used for passively secure OLEs by Ishai, Prabhakaran and Sahai (TCC'09). A result asymptotically similar to ours can be obtained by applying the IPS compiler to the aforementioned passively secure OLE protocol, but our protocol has better constants and is considerably simpler to implement. Concretely, we use only $16$ OTs to generate one actively secure OLE, and our protocol achieves active security by adding fairly simple checks to the passively secure protocol. We therefore believe our protocol takes an important step towards basing practical actively secure arithmetic computation on OLEs. Our result requires novel techniques that might be of independent interest. As an application, we present the currently most efficient OPE construction.
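
For readers unfamiliar with OLE, the functionality itself (ignoring all security machinery) is a one-line computation. The sketch below also shows the standard way a single OLE yields additive shares of a product; the prime modulus and inputs are arbitrary example values:

```python
import random

P = 2**61 - 1   # an arbitrary prime modulus; OLE works over suitable fields

def ole(a, b, x, p=P):
    # The OLE *functionality* only (I/O contract, no cryptography):
    # the sender holds (a, b), the receiver holds x and learns f(x) = a*x + b.
    # A secure protocol realizes this without revealing a, b to the
    # receiver or x to the sender.
    return (a * x + b) % p

# Standard use: one OLE yields additive shares of the product a*x.
# The sender masks with a random b and keeps -b as its share; the
# receiver's output a*x + b is the other share.
rng = random.Random(0)
a, x = 12345, 67890
b = rng.randrange(P)
recv_share = ole(a, b, x)
send_share = (-b) % P
```

Recombining the two shares modulo P recovers the product a*x, which is why cheap OLEs translate directly into cheap secure multiplications.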

Speaker Bio:
Satrajit Ghosh is pursuing his PhD in the Cryptography and Security Group at Aarhus University, Denmark, under the supervision of Jesper Buus Nielsen and Claudio Orlandi. He received his Master's from the Computer Science and Engineering Department, IIT Kharagpur, under the guidance of Dr. Abhijit Das. Prior to that, he did his B.Tech in Computer Science and Engineering and B.Sc (Honors) in Physics at the University of Calcutta. Before joining Aarhus University, he worked as a researcher at the R. C. Bose Centre for Cryptology and Security, Indian Statistical Institute, Kolkata, and at Saarland University, Germany.

Host Faculty: Dr. Arpita Patra


 

Series: M.Sc. (Engg) Thesis Defense
Title: Improved Lower Bounds for Depth Four Arithmetic Circuits

  • Speaker: Mr. Abhijat Sharma, M.Sc. (Engg)
  • Faculty Advisor: Dr. Chandan Saha
  • Date and Time: Wednesday, September 20, 2017, 10:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
We study the problem of proving lower bounds for depth four arithmetic circuits. Depth four circuits have received much attention in recent circuit lower bound results, owing to a series of works culminating in a proof that strong enough lower bounds for depth four circuits would imply super-polynomial lower bounds for general arithmetic circuits, and hence resolve one of the most central open problems in algebraic complexity, i.e., the separation of the complexity classes VP and VNP.

However, despite several efforts, the best-known lower bound for general circuits remains Omega(N log N), where N is the number of input variables. For arithmetic formulas, an Omega(N^2) lower bound is known, which has not been improved since it was first proved. The situation for depth three arithmetic circuits was similar for many years, until a recent result achieved an almost cubic lower bound, improving upon the previous best quadratic lower bound. For depth four circuits, however, proving a super-quadratic lower bound has remained open for many years.

Previous results implied a super-linear lower bound of Omega(N^{1.33}) for depth four circuits. As the main contribution of this thesis, we improve upon this bound and prove an Omega(N^{1.5}) lower bound on the size of depth four circuits computing an explicit N-variate polynomial in VNP of degree d = O(sqrt{N}). Our approach offers a potential route to a super-quadratic lower bound for depth four circuits. Taking a cue from numerous recent successes, we use the technique of shifted partial derivative measures to achieve the said lower bound; in particular, we use the dimension of projected shifted partials (DPSP) measure. For the choice of the hard polynomial, we again follow the status quo and use a variant of the Nisan-Wigderson (NW) polynomial family, which has proved very useful in arithmetic circuit complexity over the past few years.

Finally, we carefully analyze two previous works on general depth four circuits and compare their techniques to ours. We conclude that our result can potentially serve as a starting point, and that techniques similar to those of the almost cubic lower bound for depth three circuits can likely be used to strengthen our lower bound to Omega(N^{2.5}) for general depth four arithmetic circuits.


 

Series: Department Seminar
Title: SymSum: Symmetric-Sum Distinguishers Against Round Reduced SHA3

  • Speaker: Dr. Dhiman Saha
  • Date and Time: Friday, September 15, 2017, 12:00 PM
  • Venue: CSA Multimedia Class (Room No. 252, First Floor)

Abstract
Cryptographic hash functions are ubiquitous in our day-to-day digital lives. Primarily, they ensure data integrity, one of the fundamental cryptographic goals, which makes the analysis of these hash functions imperative. After briefly introducing the research area, this talk will focus on devising “distinguishers” for the latest cryptographic hash function, SHA3. Distinguishers constitute a specific paradigm for analyzing hash functions that deals with exhibiting non-random behavior. The talk will zoom in on higher-order derivatives of Boolean functions and their application to the analysis of SHA3 in the form of the zero-sum property. Finally, our latest research result, which proposes a new class of distinguishers for SHA3, will be presented. The technique relies on a variant of classical higher-order Boolean derivatives over special subspaces. It exploits the internal symmetry of the hash function to obtain the best distinguisher on SHA3, penetrating up to 9 rounds, succeeding with probability one, and outperforming the closest competitor, in terms of effort, by a factor of 4.

Speaker Bio:
Dhiman Saha is a Research Associate in the Crypto Research Lab of the Department of Computer Science and Engineering at IIT Kharagpur. He recently received his PhD from the department, with a broad research area encompassing both theoretical and physical aspects of cryptography. As part of his thesis he worked on the cryptanalysis of hash functions and authenticated ciphers. He has broken three authenticated encryption schemes submitted to the ongoing CAESAR competition and has been involved in the analysis of the latest cryptographic hash algorithm, SHA3. Before starting his PhD he worked in the Electronic Design Automation industry for over two years. He completed his MS degree at IIT Kharagpur in 2009 with a thesis on hardware security. He received his BE from NIT Agartala in 2006 and was awarded the Gold Medal for Computer Science and Engineering; he was also a recipient of the NEC Scholarship during his BE. Along with hash functions and authenticated ciphers, his recent research interests include memory-hard functions and proofs-of-work.

Host Faculty: Dr. Sanjit Chatterjee


 

Series: Department Seminar
Title: Provably Efficient Scheduling of Cache-Oblivious Wavefront Algorithms

  • Speaker: Pramod Ganapathi
  • Date and Time: Tuesday, September 12, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Iterative wavefront algorithms for evaluating dynamic programming recurrences exploit optimal parallelism but show poor cache performance. Tiled-iterative wavefront algorithms achieve optimal cache complexity and high parallelism but are cache-aware and hence are neither portable nor cache-adaptive. On the other hand, standard cache-oblivious recursive divide-and-conquer algorithms have optimal serial cache complexity but often have low parallelism due to artificial dependencies among subtasks. We present a generic algorithmic framework to systematically transform standard cache-oblivious recursive divide-and-conquer algorithms into recursive wavefront algorithms that achieve optimal parallel cache complexity and high parallelism under state-of-the-art schedulers for fork-join programs. The recursive wavefront algorithms are efficient both in theory and in practice.

Speaker Bio:
Pramod Ganapathi received his PhD in Computer Science, specializing in algorithms, from Stony Brook University, New York, USA in 2016. He was supervised by Prof. Rezaul Chowdhury. During his PhD, he designed algorithms (and frameworks) that can automatically (and semi-automatically) discover efficient recursive divide-and-conquer algorithms for a wide class of dynamic programming problems. Currently, he is the founder and CEO of a startup called Learning is Beautiful, based in Bengaluru. The company aims to teach hardcore problem-solving and deep thinking on higher education topics through simple and intuitive animated videos.

Host Faculty: Uday Kumar Reddy B


 

Series: CSA Distinguished Lecture
Title: Technology Disruptions and Innovation at Intersections

  • Speaker: Mr. K Ananth Krishnan
                   Chief Technology Officer
                   Tata Consultancy Services (TCS)
  • Date and Time: Friday, September 08, 2017, 11:30 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
What is the view from the CTO of a company at the intersection of technology and the businesses of the 1000 biggest corporations globally across 15 industries? Is the global industry hungry for research breakthroughs? What are the big disruptions across industries, and how are these triggered? How are industries responding, and which companies will succeed? Which areas of research are most in demand, and which are moving at great speed from idea to impact?

How is TCS Research and Innovation responding to this market? Ananth directs TCS' Research and Innovation and will describe the current scenario with real-world examples from around the world. He will also explain why the sciences are intersecting with computing to drive new ideas and create value.

Speaker Bio:
As CTO, Ananth directs research and innovation at Tata Consultancy Services. He has architected a 4E model to make invention, innovation, and co-innovation at TCS deliver value to TCS business and its customers. Under his leadership the TCS research community has created a significant portfolio of patents, papers, and IP. Ananth has been a member of the TCS Corporate Think-Tank since 1999 and has led several strategic initiatives. He has been a Principal Architect and Lead Consultant in the Architecture and Technology Consulting Practice, and earlier headed the TCS Systems Management and Systems Software Groups. He was elected a Fellow of the Indian National Academy of Engineering (INAE) in 2013 in recognition of his contributions to engineering. He is a Senior Member of the IEEE, a member of the Computer Society of India, and an invited faculty member in the Department of Management Studies at the Indian Institute of Technology, Madras. Named a Distinguished Alumnus of the Indian Institute of Technology, Delhi in 2009, Ananth has been listed among Computerworld's Premier 100 IT Leaders (2007) and chosen as one of Infoworld's Top 25 CTOs (2007). Ananth holds an M.Tech. in Computer Science from the Indian Institute of Technology, Delhi, a Master's degree in Physics from the same institute, and a Bachelor's degree in Physics from Fergusson College, Pune.

Host Faculty: Prof. Y Narahari


 

Series: Department Seminar
Title: Intercepting Functions for Memoization

  • Speaker: Arjun Suresh
  • Date and Time: Friday, September 01, 2017, 3:00 PM
  • Venue: CSA Multimedia Class (Room No. 252, First Floor)

Abstract
In this talk, we present mechanisms to implement function memoization at the software level. We analyze the potential of function memoization in applications and its performance gain on current architectures. We propose three schemes: a simple load-time approach that works for any dynamically linked function; a compile-time approach using the LLVM framework, which can enable memoization for any program function; and finally a hardware proposal for performing memoization in hardware, along with its potential benefits. Demonstration of the load-time approach on transcendental functions showed that memoization is applicable and gives good benefit even on modern architectures and compilers, with the restriction that it can be applied only to dynamically linked functions. Our compile-time approach extends the scope of memoization and also increases its benefit. It works for both user-defined and library functions, and can handle certain kinds of non-pure functions, such as functions with pointer arguments and global variable usage. Hardware memoization lowers the threshold for a function to be memoized and gives more performance gain on average.
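
The common idea behind all three schemes is interception plus table lookup: calls to a pure (or effectively pure) function are caught, and repeated arguments are served from a table instead of being recomputed. A Python analogue of that idea, purely for illustration (the talk's schemes target dynamically linked C functions and hardware), with `math.sin` standing in for a transcendental libm function:

```python
import math
from functools import lru_cache

calls = {"n": 0}

def slow_sin(x):
    # Stand-in for an expensive dynamically linked function; the counter
    # records how many times the real computation actually runs.
    calls["n"] += 1
    return math.sin(x)

# "Intercept" the function by rebinding the name to a memoized wrapper,
# analogous to redirecting a dynamic symbol to a caching stub at load time.
memo_sin = lru_cache(maxsize=None)(slow_sin)

for _ in range(1000):
    y = memo_sin(0.5)      # computed once; the remaining calls hit the table
```

The benefit, as in the talk, depends on the function being safe to cache: same arguments must imply the same result and no needed side effects.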

Speaker Bio:
Arjun Suresh is currently working on extensions of GATE Overflow, a complete exam-preparation package for the GATE Computer Science stream. Before this, he did postdoctoral research at The Ohio State University on high-performance GPU implementations of tensor algebra operations. He completed his doctoral studies under the guidance of Dr. Erven Rohou in the ALF team at INRIA, Rennes; his Ph.D. thesis was titled "Intercepting Functions for Memoization". Prior to that, he obtained his Master's in Engineering from the Dept. of CSA at the Indian Institute of Science under the guidance of Prof. Uday Kumar Reddy B, and his Bachelor's from the College of Engineering, Trivandrum.

Host Faculty: Uday Kumar Reddy B


 

Series: CSA Distinguished Lecture
Title: Unleashing the Potential of Heterogeneous Computing for Mobile and IoT Devices

  • Speaker: Tulika Mitra
                   Professor of Computer Science
                   School of Computing
                   National University of Singapore
  • Date and Time: Friday, September 01, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Transistor counts of silicon devices continue to increase, following Moore's Law. But the failure of Dennard scaling has brought the computing community to a crossroads where power and thermal limits have become the major constraint, especially in mobile and IoT devices with constrained form factors. Future chips in these devices can have many cores, but only a fraction of them can be switched on at any point in time. This dark silicon era is driving the emergence of heterogeneous multi-core architectures: cores with diverse computational capabilities and power-performance characteristics enable a better match between application requirements and the compute engine, leading to substantially improved energy efficiency. In this talk, I will introduce the technology trends driving heterogeneous multi-cores, provide an overview of computationally divergent and performance-heterogeneous multi-cores, and present my research group's efforts in architecture, compiler, and runtime support to fully realize the potential of heterogeneity for energy-efficient computing.

Speaker Bio:
Tulika Mitra is a Professor of Computer Science at the School of Computing, National University of Singapore (NUS). She received her PhD from the State University of New York at Stony Brook (2000) and her M.E. from the Indian Institute of Science (1997). Her research interests span various aspects of the design automation of embedded real-time systems, cyber-physical systems (CPS), and the Internet of Things (IoT), with particular emphasis on energy-efficient computing, heterogeneous computing, application-specific processors, and software timing analysis/optimization. She has authored over a hundred scientific publications in leading international journals and conferences in this domain. Her research has been recognized by a best paper award at FPT 2012 and best paper nominations at DAC (2016, 2012, 2009), DATE, VLSI Design, CODES+ISSS, FPL, ECRTS, and CASES. Prof. Mitra serves as Senior Associate Editor of the ACM Transactions on Embedded Computing Systems, Deputy Editor-in-Chief of IEEE Embedded Systems Letters, and Associate Editor of IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, IEEE Design & Test Magazine, EURASIP Journal on Embedded Systems, and IET Computers & Digital Techniques. She has served on the organizing and program committees of several major conferences in embedded systems, real-time systems, and electronic design automation.

Host Faculty: R. Govindarajan


 

Series: M.Sc. (Engg) Thesis Defense
Title: Fast Actively Secure OT Extension for Short Secrets

  • Speaker: Mr. Ajith S
                   M.Sc. (Engg) Student
                   Dept. of CSA
                   IISc
  • Faculty Advisor: Dr. Arpita Patra
  • Date and Time: Thursday, August 31, 2017, 4:30 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Oblivious transfer (OT) is one of the most fundamental cryptographic primitives, with widespread applications in general secure multi-party computation (MPC) as well as in a number of tailored and special-purpose problems of interest such as private set intersection (PSI), private information retrieval (PIR), and contract signing, to name a few. Often, the instantiations of OT require prohibitive communication and computation complexity. OT extension protocols were introduced to compute a very large number of OTs, referred to as extended OTs, at the cost of a small number of OTs, referred to as seed OTs.

We present a fast OT extension protocol for short secrets in the active setting. When used to produce 1-out-of-n OTs, our protocol outperforms all known actively secure OT extensions. It is built on the semi-honest secure extension protocol of Kolesnikov and Kumaresan from CRYPTO'13 (referred to as the KK13 protocol henceforth), which is the best-known OT extension for short secrets. At the heart of our protocol lies an efficient consistency-checking mechanism that relies on the linearity of Walsh-Hadamard (WH) codes. Asymptotically, our protocol adds a communication overhead of O(µ log κ) bits over the KK13 protocol irrespective of the number of extended OTs, where κ and µ denote the computational and statistical security parameters, respectively. Concretely, when used to generate a large enough number of OTs, our protocol adds only 0.011-0.028% communication overhead and 4-6% runtime overhead over the KK13 extension, both in LAN and WAN settings. The runtime overhead drops below 2% when, in addition, the number of the sender's inputs in the extended OTs is large enough.

As an application of our proposed extension protocol, we show that it can be used to obtain the most efficient PSI protocol secure against a malicious receiver and a semi-honest sender.


 

Series: Department Seminar
Title: Approximating Geometric Knapsack via L-packings

  • Speaker: Dr. Arindam Khan
                   Researcher, IDSIA SUPSI
                   Switzerland
  • Date and Time: Monday, August 21, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
We study the two-dimensional geometric knapsack problem (2DK), a geometric variant of the classical knapsack problem. In this problem, we are given a set of axis-aligned rectangular items, each with an associated profit, and an axis-aligned square knapsack. The goal is to find a (non-overlapping) packing of a maximum-profit subset of items inside the knapsack without rotating items. This is a very well-studied optimization problem with applications in scheduling, memory allocation, advertisement placement, etc. The best-known polynomial-time approximation factor for this problem (even just in the cardinality case) is $2+eps$ [Jansen and Zhang, SODA 2004].

After more than a decade, in this paper we break the 2-approximation barrier, achieving a polynomial-time $17/9+eps<1.89$ approximation, which improves to $558/325+eps<1.72$ in the cardinality case. We also consider the variant of the problem with rotations (2DKR), where items can be rotated by $90$ degrees. In this case too, the best-known polynomial-time approximation factor (even for the cardinality case) is $2+eps$ [Jansen and Zhang, SODA 2004]. Exploiting part of the machinery developed for 2DK plus a few additional ideas, we obtain a polynomial-time $3/2+eps$-approximation for 2DKR, which improves to $4/3+eps$ in the cardinality case.

This is joint work with Waldo Galvez, Fabrizio Grandoni, Sandy Heydrich, Salvatore Ingala, and Andreas Wiese. The result will appear in FOCS 2017.

Speaker Bio:
Arindam Khan is a researcher at IDSIA, SUPSI in Lugano, Switzerland. His research areas include approximation algorithms, online algorithms, and computational geometry. He obtained his PhD in Algorithms, Combinatorics and Optimization (ACO) from the Georgia Institute of Technology, Atlanta, USA, under Prof. Prasad Tetali. Previously he has been a research intern in the Theory group at Microsoft Research Redmond and Microsoft Research Silicon Valley, a visiting researcher at the Simons Institute, Berkeley, and a Blue Scholar at IBM Research India.

Host Faculty: Dr. Anand Louis


 

Series: Ph.D. Thesis Defense
Title: Number Theoretic, Computational and Cryptographic Aspects of a Certain Sequence of Arithmetic Progressions

  • Speaker: Mr. Cherukapally Srikanth
                   Ph.D student
                   Dept. of CSA
  • Faculty Advisor: Prof. C.E. Veni Madhavan & Dr. Sanjit Chatterjee
  • Date and Time: Wednesday, August 09, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
This thesis introduces a new mathematical object: a collection of arithmetic progressions whose elements satisfy the inverse property,

``the j-th terms of the i-th and (i+1)-th progressions are multiplicative inverses of each other modulo the (j+1)-th term of the i-th progression''. Such a collection is uniquely defined for any pair $(a,d)$ of co-prime integers. The progressions of the collection are ordered; thus we call it a sequence rather than a collection. The results of the thesis concern the following number theoretic, computational and cryptographic aspects of the defined sequence and its generalizations.

The sequence is closely connected to the classical Euclidean algorithm. Precisely, certain consecutive progressions of the sequence form ``groupings''. The difference between the common differences of any two consecutive progressions of a grouping is the same. The number of progressions in a grouping is connected to the quotient sequence of the Euclidean algorithm on co-prime input pairs. The behavior of the Euclidean algorithm has been studied extensively by the research community; the connection proven in the thesis shows, for the first time in the literature, what the quotients of the algorithm signify. Further, the leading terms of progressions within groupings satisfy a mirror-image symmetry property, called ``symmetricity''. The property is subject to the quotient sequence of the Euclidean algorithm and to divisors of integers of the form $x^2-y^2$ falling in specific intervals. The integers $a$, $d$ are the primary quantities of the defined sequence in a computational sense. Given the two, the leading term and common difference of any progression of the sequence can be computed in time quadratic in the binary length of $d$. On the other hand, the inverse computational question of finding $(a,d)$, given information on some terms of the sequence, is interesting. This problem turns out to be hard, as it requires finding solutions to a nearly-determined system of multivariate polynomial equations. Two sub-problems arising in this context are shown to be equivalent to the problem of factoring integers. The reduction to the factoring problem, in both cases, is probabilistic.
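The defining inverse property and the connection to the Euclidean algorithm both rest on modular inverses of co-prime integers, which the extended Euclidean algorithm computes. The sketch below is illustrative background only (the sequence itself is defined in the thesis); names are ours.

```python
# Extended Euclidean algorithm: the same quotient sequence the thesis
# connects to "groupings" also yields modular inverses.

def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def inverse_mod(a, m):
    """Multiplicative inverse of a modulo m; requires gcd(a, m) == 1."""
    g, x, _ = ext_gcd(a % m, m)
    if g != 1:
        raise ValueError("inverse exists only for co-prime a, m")
    return x % m

# The inverse property pairs terms t, t' with t * t' == 1 (mod m):
assert (7 * inverse_mod(7, 26)) % 26 == 1
```

The quadratic-time claim in the abstract matches the cost profile of exactly this kind of Euclidean computation on numbers of the stated binary length.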

Utilizing the computational difficulty of solving the inverse problem and the sub-problems mentioned above, we propose a symmetric-key cryptographic scheme (SKCS) and a public key cryptographic scheme (PKCS). The PKCS is also based on the hardness of the problem of finding square-roots modulo composite integers. Our proposal uses the same algorithmic and computational primitives for effecting both the PKCS and SKCS. In addition, we use the notion of the sequence of arithmetic progressions to design an entity authentication scheme. The proof of equivalence between one of the inverse computational problems mentioned above and integer factoring led us to formulate and investigate an independent problem concerning the largest divisor of an integer $N$ bounded by the square-root of $N$. We present some algorithmic and combinatorial results. In the course of the above investigations, we are led to certain open questions of a number theoretic, combinatorial and algorithmic nature. These pertain to the quotient sequence of the Euclidean algorithm, divisors of integers of the form $x^2-y^2$ in specific intervals, and the largest divisor of an integer $N$ bounded by its square-root.


Series: Ph.D. Colloquium
Title: Approximation Algorithms for Geometric Packing and Covering Problems

  • Speaker: Mr. Aniket Basu Roy
                   PhD student at CSA department
  • Faculty Advisor: Prof. Sathish Govindarajan
  • Date and Time: Monday, August 07, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
We study a host of geometric optimization problems that are NP-hard and design polynomial time approximation algorithms for them. More precisely, we are given a family of geometric objects and a point set, mostly in the plane, and study different variants and generalizations of Packing and Covering problems. Our objects of study are mostly families of non-piercing regions in the plane. We call a set of simple and connected regions non-piercing if for any pair of intersecting regions A and B their set difference A \ B is a connected region; disks and half-planes are examples, whereas lines and rectangles are piercing. For most of the problems we have studied, a simple local search algorithm is enough to yield a PTAS, whose analysis of approximation requires a suitable bipartite graph on the local search solution and the optimal solution to have balanced sub-linear separators.

We study a generalization of the standard packing problem, called the capacitated packing problem and its slight variant, the shallow packing problem. We devise a PTAS for both these problems for constant capacities. For the former problem, the objects are non-piercing whereas the objects can be even more general and only have sub-quadratic monotonic union complexity for the latter problem. The non-triviality here is to show that the intersection graph of arrangements with shallow depth, which is not planar, has balanced sub-linear separators. Our results complement the Maximum Independent Set of Rectangles problem as rectangles are both piercing and have quadratic union complexity. We also study the capacitated packing problem for the dual set system and are able to show that local search works here as well for unit capacity and devise a constant factor approximation algorithm using Bronnimann-Goodrich technique which is inspired by the method of multiplicative weight updates.

The Runaway Rectangle Escape problem is closely related to the above packing problems and is motivated by routing in printed circuit boards. Here we are given a set of axis-parallel rectangles inside a rectangular boundary and a maximum allowed depth d. The objective is to extend the maximum number of input rectangles in one of the four directions such that the maximum depth of any point is at most d after extension. We devise several approximation algorithms and a hardness result.

We study different variants of the covering problem like the Unique Coverage, Multi-Covering, Prize Collecting Set Cover, Exact Cover, and Multi-Hitting Set. For the first three problems, we show that local search yields a PTAS for non-piercing regions under some assumptions. For the Exact Cover problem, we prove that even finding a feasible solution is NP-hard for objects as special as unit squares. For the Multi-Hitting Set problem, we propose a constant factor approximation algorithm for non-piercing regions using the fact that they have shallow dual-cell complexity.

Lastly, we consider variants of the Art Gallery problems called the Minimum (Horizontal) Sliding Cameras problem, M(H)SC. We are given an orthogonal polygon and we need to deploy mobile guards who can walk along an orthogonal (horizontal) line segment and can guard a point inside the polygon if the perpendicular drawn from the point onto the line segment lies inside the polygon. Our local search algorithm yields a PTAS for the MHSC problem and also the MSC problem when the polygon has no holes. Here again, the nontriviality is to prove that an appropriate graph on orthogonal line segments is planar. We also observe that the MSC problem for polygons with holes is APX-hard.


Series: CSA Distinguished Lecture
Title: MULTI-OBJECTIVE NAVIGATION IN UNCERTAINTY

  • Speaker: Professor Krishna R. Pattipati
                   University of Connecticut
                   Storrs, Connecticut
                   USA
  • Date and Time: Thursday, August 03, 2017, 3:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Routing in uncertain environments involves many contextual elements, such as different environmental conditions (ensemble forecasts with varying spatial and temporal uncertainty), multiple objectives, changes in mission goals while en route (e.g., training requirements, pop-up threats, humanitarian aid), and asset status. In this talk, we focus on robust planning under uncertainty by exploiting weather forecast ensembles and realizations using TMPLAR, a Tool for Multi-objective PLanning and Asset Routing, in the context of 3-D and 4-D path navigation. Our approach includes solving for the m-best shortest paths for each weather forecast realization via Murty's search space partitioning strategy and evaluating the mean, variance, and signal-to-noise ratio (SNR) of the paths over all weather possibilities. The robust path is one that, for example, minimizes the root mean square (RMS) value or one that maximizes the SNR, given the possible forecast realizations. In a complementary effort to compact the feature-rich multi-dimensional navigational problem space, we develop a self-organizing map (SOM)-based data reduction scheme that filters complex contexts and highlights only the key impacting features within a given information space. We demonstrate the utility of our approaches via application to a real-world shipping tragedy, using the weather forecast realizations available prior to the event.
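The robustness criterion described above can be sketched in a few lines: score each candidate path's cost across the forecast realizations and pick, e.g., the RMS-minimizing one. The numbers and path names below are made up for illustration; TMPLAR's actual models are far richer.

```python
# Toy version of the robust-path selection rule over an ensemble of forecasts.
import math

def rms(costs):
    return math.sqrt(sum(c * c for c in costs) / len(costs))

def snr(costs):
    """Mean over standard deviation of per-realization costs."""
    mean = sum(costs) / len(costs)
    var = sum((c - mean) ** 2 for c in costs) / len(costs)
    return mean / math.sqrt(var) if var > 0 else float("inf")

# Cost of each candidate path under each of three forecast realizations
# (in practice these come from the m-best shortest paths per realization).
ensemble_costs = {
    "path_A": [10.0, 11.0, 30.0],   # cheap on average but volatile
    "path_B": [14.0, 15.0, 16.0],   # slightly dearer but stable
}

robust = min(ensemble_costs, key=lambda p: rms(ensemble_costs[p]))
```

Under the RMS criterion the stable path wins despite its higher best-case cost, which is exactly the trade-off the abstract's robustness measures are designed to expose.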

Speaker Bio:
Krishna R. Pattipati received the B. Tech. degree in electrical engineering with highest honours from the Indian Institute of Technology, Kharagpur, in 1975, and the M.S. and Ph.D. degrees in control and communication systems from UConn, Storrs, in 1977 and 1980, respectively. He was with ALPHATECH, Inc., Burlington, MA from 1980 to 1986. He has been with the Department of Electrical and Computer Engineering at UConn since 1986, where he is currently the Board of Trustees Distinguished Professor and the UTC Chair Professor in Systems Engineering. Dr. Pattipati’s research activities are in the areas of proactive decision support, uncertainty quantification, smart manufacturing, autonomy, knowledge representation, and optimization-based learning and inference. A common theme among these applications is that they are characterized by a great deal of uncertainty, complexity, and computational intractability. He is a cofounder of Qualtech Systems, Inc., a firm specializing in advanced integrated diagnostics software tools (TEAMS, TEAMS-RT, TEAMS-RDS, TEAMATE, PackNGo), and serves on the board of Aptima, Inc. Dr. Pattipati was selected by the IEEE Systems, Man, and Cybernetics (SMC) Society as the Outstanding Young Engineer of 1984, and received the Centennial Key to the Future award. He served as the Editor-in-Chief of the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS--PART B from 1998 to 2001. He was co-recipient of the Andrew P. Sage Award for the Best SMC Transactions Paper for 1999, the Barry Carlton Award for the Best AES Transactions Paper for 2000, the 2002 and 2008 NASA Space Act Awards for "A Comprehensive Toolset for Model-based Health Monitoring and Diagnosis," and “Real-time Update of Fault-Test Dependencies of Dynamic Systems: A Comprehensive Toolset for Model-Based Health Monitoring and Diagnostics”, the 2003 AAUP Research Excellence Award, and the 2005 School of Engineering Teaching Excellence Award at the University of Connecticut.
He is an elected Fellow of IEEE for his contributions to discrete-optimization algorithms for large-scale systems and team decision-making, and is a member of the Connecticut Academy of Science and Engineering.

Host Faculty: Prof. Y Narahari


Series: Department Seminar
Title: Data Driven Precondition Inference for Compiler Optimizations

  • Speaker: Dr. Santosh Nagarakatte
                   Assistant Professor
                   Department of Computer Science, Rutgers University
  • Date and Time: Thursday, August 03, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Peephole optimizations are a common source of compiler bugs. Compiler developers typically transform an incorrect peephole optimization into a valid one by strengthening the precondition. This process is challenging and tedious. In this talk, I will describe our new data-driven approach for inferring preconditions for peephole optimizations expressed in Alive. Our main idea is to generate positive and negative examples for an optimization in a context without tests, enumerate predicates on demand, and learn a set of predicates that separates the positive and negative examples. We repeat this process until we are able to find a precondition that ensures the validity of the optimization. Our prototype, ALIVE-INFER, reports both a weakest precondition and a set of succinct partial preconditions to the developer. Our prototype generates preconditions that are weaker than LLVM’s preconditions for the majority of the optimizations in the Alive suite. The talk will also demonstrate the applicability of this technique to generalize optimization patterns generated by Souper, an LLVM IR–based super-optimizer.
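The generate-enumerate-learn loop described above can be caricatured in a few lines of Python. This is a toy sketch, not ALIVE-INFER: its predicate language is over Alive/LLVM IR constants, not Python dicts, and all names below are illustrative.

```python
# Toy precondition learning: keep candidate predicates that hold on all
# positive examples, then check the conjunction rejects every negative one.

positives = [{"c": 4}, {"c": 8}]   # inputs where the optimization is valid
negatives = [{"c": 3}, {"c": 0}]   # inputs where it is invalid

candidates = {
    "isPowerOf2(c)": lambda e: e["c"] > 0 and e["c"] & (e["c"] - 1) == 0,
    "c != 0":        lambda e: e["c"] != 0,
    "c > 0":         lambda e: e["c"] > 0,
}

def learn(positives, negatives, candidates):
    # A usable predicate must accept every positive example...
    kept = {name: p for name, p in candidates.items()
            if all(p(e) for e in positives)}
    # ...and the conjunction of kept predicates must reject every negative.
    if all(any(not p(e) for p in kept.values()) for e in negatives):
        return sorted(kept)
    return None   # enumerate more predicates and retry

precondition = learn(positives, negatives, candidates)
```

A real inference engine would additionally prune redundant conjuncts (here `c != 0` and `c > 0` are implied by `isPowerOf2(c)`) on the way to a weakest precondition.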

Speaker Bio:
Santosh Nagarakatte is an Assistant Professor of Computer Science at Rutgers University. He obtained his PhD from the University of Pennsylvania in 2012. His research interests are in Hardware-Software Interfaces spanning Programming Languages, Compilers, Software Engineering, and Computer Architecture. His papers have been selected as IEEE MICRO Top Picks papers of computer architecture conferences in 2010 and 2013. He has received the NSF CAREER Award in 2015, the ACM SIGPLAN PLDI 2015 Distinguished Paper Award, and the ACM SIGSOFT ICSE 2016 Distinguished Paper Award for his research on LLVM compiler verification. His papers have been selected as a 2016 SIGPLAN Research Highlights Paper and a 2017 Communications of the ACM Research Highlights Paper.

Host Faculty: Prof. Vinod Ganapathy


Series: Department Seminar
Title: The program induction route to explainable AI

  • Speaker: Prof. Subramanian Ramamoorthy
  • Date and Time: Wednesday, August 02, 2017, 11:30 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
The confluence of advances in diverse areas including machine learning, large scale computing and reliable commoditized hardware have brought autonomous robots to the point where they are poised to be genuinely a part of our daily lives. Some of the application areas where this seems most imminent, e.g., autonomous vehicles, also bring with them stringent requirements regarding safety, explainability and trustworthiness. These needs seem to be at odds with the ways in which recent successes have been achieved, e.g., with end-to-end learning. In this talk, I will try to make a case for an approach to bridging this gap, through the use of programmatic representations that intermediate between opaque but efficient learning methods and other techniques for reasoning that benefit from ‘symbolic’ representations.

I will start by attempting to frame the overall problem, drawing on some of the motivations of the DARPA Explainable AI program (under the auspices of which we are starting a new project, COGLE) and on extant ideas regarding safety and dynamical properties in the control theorists’ toolbox.

Our first result will be based on a framework for Grounding and Learning Instances through Demonstration and Eye tracking (GLIDE). The problem here is to learn the mapping between abstract plan symbols and their physical instances in the environment, i.e., physical symbol grounding, starting from cross-modal input comprising high-level task descriptions (e.g., from a natural language instruction) together with a detailed video or joint-angle signal. This problem is formulated in terms of a probabilistic generative model and addressed using an algorithm for computationally feasible inference that associates traces of task demonstration with sequences of fixations, which we call fixation programs.

The second related result more directly addresses the issue of inferring the processes that underlie observed black-box phenomena, in the form of causal mechanisms. We propose to learn high-level programs in order to represent abstract models, which capture the invariant structure in the observed data. We introduce the pi-machine (program-induction machine), an architecture able to induce interpretable LISP-like programs from observed data traces. We propose an optimization procedure for program learning based on backpropagation, gradient descent and A* search. We apply the proposed method to problems including system identification, explaining the behavior of a DQN agent, and explaining human demonstrations in planned activities. Our results show that the pi-machine can efficiently induce interpretable programs from individual data traces.

Speaker Bio:
Prof. Subramanian Ramamoorthy is a Reader (Associate Professor) in the School of Informatics, University of Edinburgh, where he has been on the faculty since 2007. He is a Coordinator of the EPSRC Robotarium Research Facility, and an Executive Committee Member for the Edinburgh Centre for Robotics and the Bayes Centre. He received his PhD in Electrical and Computer Engineering from The University of Texas at Austin in 2007. He is an elected Member of the Young Academy of Scotland at the Royal Society of Edinburgh, and has been a Visiting Professor at Stanford University and the University of Rome “La Sapienza”. His research focus is on robot learning and decision-making under uncertainty, addressed through a combination of machine learning techniques with emphasis on issues of transfer, online and reinforcement learning, as well as new representations and analysis techniques based on geometric/topological abstractions.

Host Faculty: Aditya Kanade


Series: Ph.D. Colloquium
Title: Geometric and Topological Methods for Biomolecular Visualization

  • Speaker: Mr. Talha Bin Masood
                   PhD student at CSA department
  • Faculty Advisor: Prof. Vijay Natarajan
  • Date and Time: Monday, July 31, 2017, 11:00 AM
  • Venue: CSA Multimedia Class (Room No. 252, First Floor)

Abstract
In recent years, techniques from the fields of scientific visualization and computational geometry have increasingly found application in the study of biomolecular structure. In this thesis, we focus on problems related to the identification and visualization of cavities and channels in proteins. In the first part of the thesis, we describe two methods: one for extraction and visualization of biomolecular channels, and the other for extraction of cavities in uncertain data. We also describe two software tools based on the proposed methods, called ChExVis and RobustCavities. In the second part of the thesis, we describe efficient GPU based parallel algorithms for two geometric structures widely used in the study of biomolecules. One structure is the discrete Voronoi diagram, which finds applications in channel visualization; the other is the alpha complex, which is extremely useful in studying geometric and topological properties of biomolecules.

In this talk, we will focus on the software tool called ChExVis designed for extraction and visualization of channels in biomolecules. We will describe the alpha complex based framework for extraction of geometrically feasible channels. In addition, we will present novel ways of visualizing the amino-acids lining the channels together with their physico-chemical properties. We will discuss a few case studies that demonstrate the utility of the proposed method. Secondly, we will describe a GPU based algorithm for the computation of the alpha complex, a subcomplex of the Delaunay triangulation. The algorithm exploits the knowledge of the typical distribution and sizes of atoms in biomolecules to achieve a speed-up of up to 13x over the state-of-the-art algorithm.

Speaker Bio:
Talha is a Ph.D. candidate in Computer Science at the Dept. of Computer Science and Automation, Indian Institute of Science, Bangalore. He received his B.Tech. degree from Aligarh Muslim University and his M.E. degree from the Indian Institute of Science, both in Computer Science and Engineering. His research interests include scientific visualization, computational geometry, computational topology, and their applications to structural biology.


Series: Ph.D. Colloquium
Title: Program analyses to support memory-saving refactorings in Java programs

  • Speaker: Girish Maskeri Rama
  • Faculty Advisor: K. V. Raghavan
  • Date and Time: Friday, July 28, 2017, 11:30 AM
  • Venue: CSA Multimedia Class (Room No. 252, First Floor)

Abstract
Software commonly consumes unexpectedly high amounts of memory, frequently due to programming idioms that are used to make software more reliable, maintainable and understandable. In the case of modern object-oriented systems, this problem is partly due to the creation of large numbers of co-existing *isomorphic* objects. Intuitively, two objects are isomorphic if they are of the same type, have identical values in corresponding primitive fields, and are such that corresponding reference fields themselves point to isomorphic objects. In other words, the portions of memory rooted at the two objects are isomorphic shape-wise as well as value-wise. A significant reduction in heap usage can therefore be achieved if the code is refactored to *de-duplicate* or *share* objects whenever possible instead of always creating distinct but possibly isomorphic objects. Such a refactoring, which employs a cache to keep track of objects created so far and to share them, is termed object-sharing refactoring. In practice, object-sharing refactoring is commonly used, as it directly impacts the attributes of software that end users care about - memory utilization and performance. However, manual refactoring is tedious and error-prone.
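The cache-based sharing idiom described above can be sketched in miniature. This is a hypothetical Python analogue of the Java refactoring discussed in the thesis (akin to interning or hash-consing); the class and factory names are ours.

```python
# Object-sharing sketch: a factory consults a cache keyed by the fully
# initialized field values, so isomorphic objects are shared, not duplicated.

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

_cache = {}

def make_point(x, y):
    """Replacement for a raw allocation site after object-sharing refactoring:
    return an existing isomorphic object when the cache already holds one."""
    key = (x, y)           # fields are immutable-by-convention here
    if key not in _cache:
        _cache[key] = Point(x, y)
    return _cache[key]

p, q = make_point(1, 2), make_point(1, 2)
assert p is q              # one object now serves both uses
```

The safety question the thesis's points-to analysis answers is precisely whether callers could tell the difference, e.g., by later mutating `p` and observing the change through `q`.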

To support object-sharing refactoring we have (1) designed and implemented an approach to estimate memory-savings potential due to this refactoring, (2) espoused the idea of *full initialization points* of objects, and a static analysis to identify these points, and (3) proposed a scalable refinement-based points-to analysis for verifying the safety of object-sharing refactoring, but which has general applicability to several other verification tasks as well.

(1) We present a dynamic analysis technique for *estimating*, for all the allocation sites in a program and a given input, the reduction in heap memory usage (in bytes, or as a percentage) to be obtained by employing object sharing at each site. The quantitative estimates produced by our technique of a user-observable benefit (i.e., actual memory savings) make it easier for developers to select sites to refactor. Experimentation with our estimation tool on real Java programs indicates that nearly all applications have potential for reduction of memory usage by object sharing, with up to 37% estimated reduction in heap footprint in the best case.

(2) We define a novel concept termed full-initialization points (FIPs) to characterize the points in the program where objects allocated at any chosen allocation site become fully initialized. This notion supports object-sharing refactoring as follows: the FIPs are good candidate program points where the cache lookup can be performed to find isomorphic objects and share them. Additionally, we argue that FIPs are useful in the context of other allocation-site refactorings, such as immutability refactoring and refactoring to the builder pattern. We present a novel and conservative static analysis to detect FIPs for a given allocation site. By introducing code to cache and share objects at the FIPs suggested by our analysis, object-sharing refactoring was able to obtain a mean memory savings of 11.4% on a set of real Java benchmarks. Immutability refactoring guided by our analysis achieved a mean runtime speedup of 1.6X compared to performing the same refactoring using a baseline approach that did not make use of FIPs.

(3) A standard points-to analysis approach, namely, the object sensitivity approach, uses an equal level of precision to represent symbolic objects allocated at all allocation sites in a program. This approach does not scale to large programs unless low levels of precision are used. We show how to use a user's query of interest to identify a limited set of allocation sites that need a higher level of precision, and use higher precision only at these sites. We have implemented our approach. Experiments using our implementation show that for the task of checking safety of applying object-sharing refactoring at each site, our approach gives mean improvement in precision of 4.4% over the previous approach, on a set of real benchmarks. For another well-known verification task, namely downcast safety analysis, our approach results in mean precision improvement of 11.9% over the previous approach.

Speaker Bio:
Girish Rama is employed at Infosys Ltd., and is a PhD student at CSA, IISc. His research interests are in software refactoring, software modularity and metrics, and program analysis techniques.


Series: Ph.D. Thesis Defense
Title: Power Issues in SoCs: Power Aware DFT Architecture and Power Estimation

  • Speaker: Mr. Jaynarayan Thakurdas Tudu
                   Ph.D student
                   Dept. of CSA, IISc
  • Faculty Advisor: Prof. Matthew Jacob Thazhuthaveetil
  • Date and Time: Friday, July 28, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Test power, data volume, and test time have been long-standing problems for sequential scan based testing of system-on-chip (SoC) designs. Modern SoCs fabricated at lower technology nodes are complex in nature; the transistor count is as large as billions of gates for some microprocessors, and design complexity is projected to increase further in the coming years in accordance with Moore’s law. The larger gate count and the integration of multiple functionalities are the causes of higher test power dissipation, test time and data volume. The dynamic power dissipation during scan testing, i.e. during scan shift, launch and response capture, is a major concern for reliable as well as cost effective testing. Excessive average power dissipation leads to a thermal problem which causes burn-out of the chip during testing. Peak power, on the other hand, causes test failure due to power-induced additional delay. Test failure has a direct impact on yield. The test power problem in modern 3D stacked ICs is an even more serious issue. Estimating the worst case functional power dissipation is yet another great challenge. The worst case functional power estimation is necessary because it gives an upper bound on the functional power dissipation, which can further be used to determine a safe power zone for the test.

Several solutions have been proposed in the past to address these issues. This thesis makes three major contributions: 1) sequential scan chain reordering, 2) JScan, an alternative Joint-scan DFT architecture that primarily addresses the test power issues along with test time and data volume, and 3) an integer linear programming methodology to address the power estimation problem. In order to reduce test power during shift, we have proposed a graph theoretic formulation for scan chain reordering and for optimum scan shift operation. For each formulation a set of algorithms is proposed. Experimental results on ISCAS-89 benchmark circuits show reductions of around 25% and 15% in peak power and scan shift time respectively.
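The intuition behind shift-power reduction can be illustrated with a crude proxy: switching activity during scan shift grows with the number of bit transitions in the vector as it moves through the chain, and reordering the chain permutes those bits. This sketch is ours, not the thesis's graph-theoretic formulation, and the transition count is only a stand-in for a real power model.

```python
# Proxy for scan-shift switching activity: count adjacent-bit transitions
# in the test vector as ordered along the scan chain.

def shift_transitions(vector):
    """Number of 0<->1 transitions seen while shifting `vector` through."""
    return sum(vector[i] != vector[i + 1] for i in range(len(vector) - 1))

def reorder(vector, perm):
    """Apply a scan-cell permutation (a candidate chain reordering)."""
    return [vector[i] for i in perm]

v = [1, 0, 1, 0, 1, 0]                    # alternating bits: toggles everywhere
print(shift_transitions(v))               # -> 5
print(shift_transitions(reorder(v, [0, 2, 4, 1, 3, 5])))  # -> 1
```

Choosing a permutation that minimizes such a cost over the whole test set is what turns the problem into the graph optimization the abstract alludes to.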

In order to have a holistic DFT architecture which could solve the test power, test time, and data volume problems, a new DFT architecture called Joint-scan (JScan) has been developed. In JScan we have integrated the serial and random access scan architectures in a systematic way, allowing JScan to harness the respective advantages of each architecture. The serial scan architecture suffers from test power, test time, and data volume problems; however, it is simple in terms of functionality and cost effective in terms of DFT circuitry. The random access scan architecture is the opposite: it is power efficient and requires less test time and data volume than serial scan, but it occupies a larger DFT area and introduces routing congestion. Therefore, we have proposed a methodology to realize the JScan architecture as an efficient alternative to standard serial and random access scan. Further, the JScan architecture is optimized, resulting in a 2-Mode 2M-JScan Joint-scan architecture. The proposed architectures are experimentally verified on larger benchmark circuits and compared with existing state of the art DFT architectures. The results show a reduction of 50% to 80% in test power and 30% to 50% in test time and data volume. The proposed architectures are also evaluated for routing area minimization, and we obtain a saving of around 7% to 15% of chip area.

Estimating the worst case functional power being a challenging problem, we have proposed a binary integer linear programming (BILP) based methodology. Two different formulations have been proposed considering different delay models, namely zero-delay and unit-delay. The proposed methodology generates a pair of input vectors which toggle the circuit so as to dissipate the worst-case power. The BILP problems are solved using the CPLEX solver for ISCAS-85 combinational benchmark circuits. For some of the circuits, the proposed methodology attained the worst possible power dissipation, i.e., 80 to 100% toggling in nets.

Copyright: CSA, IISc 2017