UPCOMING SEMINARS 

Series: Ph.D. Colloquium
Title: Deep Learning with Minimal Supervision
Speaker: Mr. Gaurav Pandey, Ph.D. student, Dept. of CSA
Faculty Advisor: Prof. Ambedkar Dukkipati
Date and Time: Tuesday, September 26, 2017, 12:00 PM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: In recent years, deep neural networks have achieved extraordinary performance
on supervised learning tasks. Convolutional neural networks (CNNs) have vastly
improved the state of the art for most computer vision tasks, including object
recognition and segmentation. However, their success relies on the presence of a
large amount of labeled data. In contrast, relatively little work has been done in
deep learning to handle scenarios in which access to ground truth is limited, partial,
or completely absent. In this thesis, we propose deep neural network models to handle
challenging problems with limited labeled information.
Our first contribution is a neural architecture that enables the extraction of
infinitely many features from an object while keeping inference tractable. This
is achieved using the "kernel trick": we express the inner product in the
infinite-dimensional feature space as a kernel. The kernel can either be computed
exactly for single-layer feedforward networks, or approximated by an iterative algorithm
for deep convolutional networks. The corresponding models are referred to as stretched
deep networks (SDNs). We show that when the amount of training data is limited, SDNs with
random weights drastically outperform fully supervised CNNs with similar architectures.
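As a concrete illustration of an exact kernel for an infinitely wide single hidden layer (not necessarily the construction used in the thesis), the order-1 arc-cosine kernel of Cho and Saul (2009) gives, in closed form, the inner product induced by infinitely many ReLU units with standard Gaussian random weights:

```python
import numpy as np

def arccos_kernel(X, Y):
    """Order-1 arc-cosine kernel (Cho & Saul, 2009): the exact inner
    product induced by an infinitely wide single hidden layer of ReLU
    units with i.i.d. standard Gaussian weights."""
    nx = np.linalg.norm(X, axis=1, keepdims=True)
    ny = np.linalg.norm(Y, axis=1, keepdims=True)
    # Cosine of the angle between every pair of rows, clipped for safety.
    cos = np.clip((X @ Y.T) / (nx * ny.T + 1e-12), -1.0, 1.0)
    theta = np.arccos(cos)
    return (nx * ny.T / np.pi) * (np.sin(theta) + (np.pi - theta) * np.cos(theta))

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
K = arccos_kernel(X, X)
assert np.allclose(K, K.T)                    # kernel matrix is symmetric
assert np.all(np.linalg.eigvalsh(K) > -1e-8)  # and positive semi-definite
```

Any kernel method (e.g. an SVM) run on such a matrix then implicitly operates in the infinite random feature space without ever materializing it.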
While SDNs perform reasonably well for classification with limited labeled data, they
cannot utilize unlabeled data, which is often much easier to obtain. A common approach to utilizing
unlabeled data is to couple the classifier with an autoencoder (or its variants), thereby
minimizing reconstruction error in addition to the classification error. We discuss the limitations
of decoder-based architectures and propose a model that allows unlabeled data to be utilized
without the need for a decoder. This is achieved by jointly modeling the distribution of data
and latent features in a manner that explicitly assigns zero probability to unobserved data. The
joint probability of the data and the latent features is maximized using a two-step EM-like
procedure. Depending on the task, we allow the latent features to be one-hot or real-valued
vectors and define a suitable prior on the features. For instance, one-hot features correspond
to class labels and are directly used for the unsupervised and semi-supervised classification
tasks, whereas real-valued feature vectors are fed as input to simple classifiers for auxiliary
supervised discrimination tasks. The proposed model, which we refer to as the discriminative encoder
(or DisCoder), is flexible in the type of latent features it can capture. The proposed model
achieves state-of-the-art performance on several challenging datasets.
Having addressed the problem of utilizing unlabeled data for classification, we move to a domain
where obtaining labels is far more expensive: semantic image segmentation. Explicitly
labeling each pixel of an image with the object that the pixel belongs to is an expensive operation,
in terms of both time and effort.
Currently, only a few classes of images have been densely (pixel-level) labeled.
Even among these classes, only a few images per class have pixel-level supervision.
Models that rely on densely-labeled images cannot utilize the much larger set of weakly
annotated images available on the web. Moreover, these models cannot learn the segmentation
masks for new classes for which there is no densely labeled data.
Hence, we propose a model for utilizing weakly-labeled data for semantic segmentation of
images. This is achieved by generating fake labels for each image, while simultaneously
forcing the output of the CNN to satisfy the mean-field constraints imposed by a conditional
random field. We show that one can enforce the CRF constraints by forcing the distribution at
each pixel to be close to the distribution of its neighbors.
The proposed model is very fast to train and achieves state-of-the-art performance
on the popular VOC2012 dataset for the task of weakly supervised semantic image segmentation.
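A minimal sketch of the neighbor-consistency idea, assuming per-pixel softmax outputs and a simple KL penalty against the average of the 4-connected neighbors; the thesis's exact mean-field CRF formulation (pairwise potentials, filtering) is certainly richer than this stand-in:

```python
import numpy as np

def neighbor_consistency_loss(probs):
    """probs: (H, W, C) per-pixel class distributions from a CNN softmax.
    Penalizes each pixel's distribution for deviating from the mean
    distribution of its 4-connected neighbors (a crude stand-in for the
    mean-field CRF constraint described in the abstract)."""
    p = np.pad(probs, ((1, 1), (1, 1), (0, 0)), mode='edge')
    # Average of the up/down/left/right neighbors via padded shifts.
    neigh = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
    # KL(probs || neigh), summed over all pixels.
    return float(np.sum(probs * (np.log(probs + 1e-9) - np.log(neigh + 1e-9))))

# A spatially uniform prediction map has zero neighbor disagreement...
uniform = np.full((4, 4, 3), 1.0 / 3.0)
assert abs(neighbor_consistency_loss(uniform)) < 1e-9
# ...while a checkerboard of hard labels is heavily penalized.
checker = np.zeros((2, 2, 2))
checker[0, 0, 0] = checker[1, 1, 0] = 1.0
checker[0, 1, 1] = checker[1, 0, 1] = 1.0
assert neighbor_consistency_loss(checker) > 1.0
```

In training, a penalty of this shape would be added to the classification loss on the fake labels, pushing the CNN toward spatially smooth segmentations.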
Series: Department Seminar
Title: SOPER: Discovering the Influence of Fashion and the Many Faces of User from Session Logs using Stick Breaking Process
Speaker: Ms. Lucky Dhakad, Flipkart
Date and Time: Monday, September 25, 2017, 2:00 PM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: Recommending lifestyle articles is of immediate interest to the e-commerce industry
and is beginning to attract research attention. Commonly followed strategies, such as
recommending popular items, are inadequate for this vertical for two reasons.
Firstly, users have their own personal preferences over items, referred to as personal
styles, which lead to the long-tail phenomenon. Secondly, each user displays multiple
personas, and each persona has a preference over items that could be dictated by a
particular occasion; e.g., dressing for a party is different from dressing to
go to the office. Recommendation in this vertical is crucially dependent on discovering
the styles for each of the multiple personas. No existing literature addresses this problem.
We posit a generative model which describes each user by a Simplex Over PERsonas, SOPER,
where a persona is described as the individual's preferences over prevailing styles,
modeled as topics over items. The choice of simplex and the long-tail nature
necessitate the use of a stick-breaking process. The main technical contribution is an
efficient collapsed Gibbs sampling based algorithm for solving the attendant inference problem.
Trained on large-scale interaction logs spanning more than half a million sessions collected
from an e-commerce portal, SOPER outperforms previous baselines by a large margin of 35%
in identifying personas. Consequently, it comprehensively outperforms several competitive
baselines on the task of recommending from a catalogue of roughly 150 thousand
lifestyle articles, improving the recommendation quality as measured by AUC by a
staggering 12.23%, in addition to aiding the interpretability of the uncovered personal and
fashionable styles, thus advancing our precise understanding of the underlying phenomena.
Speaker Bio: Lucky Dhakad completed her ME from CSA, IISc in 2016 and is presently working with Flipkart.
*This is a practice talk for CIKM 2017*
Host Faculty: Prof. Chiranjib Bhattacharyya
Series: M.Sc. (Engg) Thesis Defense
Title: A Study of Thompson Sampling Approach for Sleeping Multi-Armed Bandit Problems
Speaker: Mr. Aritra Chatterjee, M.Sc. (Engg) Student, Dept. of CSA, IISc
Faculty Advisor: Prof. Y. Narahari
Date and Time: Monday, September 25, 2017, 11:30 AM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: The multi-armed bandit (MAB) problem provides a
convenient abstraction for many online learning problems
arising in modern applications, including Internet advertising,
crowdsourcing, online procurement, smart grids, etc. Several
variants of the MAB problem have been proposed to extend
the basic model to a variety of practical and general settings.
The sleeping multi-armed bandit (SMAB) problem is one such
variant, where the subset of available arms varies with time.
This study focuses on analyzing the efficacy of the
Thompson Sampling algorithm for solving the SMAB problem.
Any algorithm for the classical MAB problem is expected to
choose one of K available arms (actions) in each of T
consecutive rounds. Each choice of an arm generates a
stochastic reward from an unknown but fixed distribution.
The goal of the algorithm is to maximize the expected sum of
rewards over the T rounds (or, equivalently, minimize the
expected total regret) relative to the best fixed action in
hindsight. In many of these settings, however, not all arms
may be available in any given round. For example, in Internet
advertising, some advertisers might choose to stay away from
the auction due to budget constraints; in crowdsourcing,
some workers may not be available at a given time due to
time-zone differences. This implies that the algorithm
needs to work only with the set of available arms. Further,
unlike in the classical setting, the performance of an algorithm
can no longer be evaluated relative to the best fixed arm in
hindsight; the regret is now measured relative to the
best "available" arm in hindsight. One possible way is to
compute the overall regret as the worst-case regret over all
possible sample paths of availabilities (that is, under
adversarial selection of available arms).
In the literature, approaches based on upper confidence
bounds (UCB) have been proposed and investigated for the
SMAB problem. Our contribution is to investigate a
Thompson sampling based approach. Our key finding is to
establish a logarithmic regret bound, which non-trivially
generalizes a similar bound known for this approach in the
classical MAB setting. Our bound also matches (up to
constants) the best-known lower bound for the SMAB
problem. Further, we show via detailed simulations that the
Thompson Sampling approach in fact outperforms the
known algorithms for the SMAB problem.
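A toy sketch of Thompson Sampling adapted to the sleeping setting, with Bernoulli rewards and Beta posteriors. This is illustrative only: the arm availabilities and reward means below are made up, and the thesis's algorithm and regret analysis are more general than this Bernoulli special case:

```python
import random

def sleeping_thompson(arms, available, pull, rounds, rng):
    """Thompson sampling with Beta(1,1) priors, restricted each round
    to the currently available arms (SMAB with Bernoulli rewards)."""
    succ = {a: 1 for a in arms}   # Beta posterior parameters per arm
    fail = {a: 1 for a in arms}
    total = 0
    for t in range(rounds):
        avail = available(t)
        # Sample a mean estimate for each awake arm; play the argmax.
        chosen = max(avail, key=lambda a: rng.betavariate(succ[a], fail[a]))
        r = pull(chosen)            # observe a 0/1 reward
        succ[chosen] += r
        fail[chosen] += 1 - r
        total += r
    return total, succ, fail

rng = random.Random(42)
means = {0: 0.2, 1: 0.8, 2: 0.5}
pull = lambda a: 1 if rng.random() < means[a] else 0
available = lambda t: [0, 1, 2] if t % 2 == 0 else [0, 2]  # arm 1 "sleeps"
total, succ, fail = sleeping_thompson([0, 1, 2], available, pull, 2000, rng)
# Exactly one arm's posterior is updated per round.
assert sum(succ[a] + fail[a] for a in (0, 1, 2)) == 6 + 2000
# The uniformly worst arm 0 should be pulled far less than arm 1.
assert succ[1] + fail[1] > succ[0] + fail[0]
```

Note how the regret benchmark shifts with availability: in odd rounds the best available arm is arm 2, so a good SMAB algorithm must learn to switch rather than wait for arm 1.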


PAST SEMINARS 

Series: Department Seminar
Title: Maliciously Secure Oblivious Linear Function Evaluation with Constant Overhead
Speaker: Mr. Satrajit Ghosh, PhD Student, Cryptography and Security Group, Aarhus University, Denmark
Date and Time: Thursday, September 21, 2017, 3:00 PM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: In this work we consider the problem of oblivious linear function evaluation (OLE). OLE is a
special case of oblivious polynomial evaluation (OPE) and deals with the oblivious evaluation
of a linear function $f(x)=ax+b$. This problem is non-trivial in the sense that the sender
chooses $a,b$ and the receiver $x$, but the receiver may only learn $f(x)$. We present a highly
efficient and UC-secure construction of OLE in the OT-hybrid model that requires only $O(1)$ OTs
per OLE. The construction is based on noisy encodings introduced by Naor and Pinkas (STOC'99) and
used for passively secure OLEs by Ishai, Prabhakaran and Sahai (TCC'09). A result asymptotically
similar to ours can be obtained by applying the IPS compiler to the aforementioned passively secure OLE
protocol, but our protocol provides better constants and would be considerably simpler to implement.
Concretely, we use only $16$ OTs to generate one actively secure OLE, and our protocol achieves active
security by adding fairly simple checks to the passively secure protocol. We therefore believe our protocol
takes an important step towards basing practical actively-secure arithmetic computations on OLEs.
Our result requires novel techniques that might be of independent interest. As an application, we
present the currently most efficient OPE construction.
Speaker Bio: Satrajit Ghosh is pursuing his PhD in the Cryptography and Security Group, Aarhus University,
Denmark, under the supervision of Jesper Buus Nielsen and Claudio Orlandi. He completed his Master's from
the Computer Science and Engineering Department, IIT Kharagpur under the guidance of
Dr. Abhijit Das. Prior to that, he did his B.Tech in Computer Science and Engineering and B.Sc (Honors)
in Physics from the University of Calcutta.
Before joining Aarhus University, he also worked as a researcher at the R. C. Bose Centre for
Cryptology and Security, Indian Statistical Institute, Kolkata, and at Saarland University, Germany.
Host Faculty: Dr. Arpita Patra
Series: M.Sc. (Engg) Thesis Defense
Title: Improved Lower Bounds for Depth Four Arithmetic Circuits
Speaker: Mr. Abhijat Sharma, M.Sc. (Engg) Student
Faculty Advisor: Dr. Chandan Saha
Date and Time: Wednesday, September 20, 2017, 10:00 AM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: We study the problem of proving lower bounds for depth four arithmetic
circuits. Depth four circuits have been receiving much attention in recent
circuit lower bound results, owing to a series of results
culminating in a proof that strong enough lower bounds for depth four circuits
would imply super-polynomial lower bounds for general arithmetic circuits, and
hence solve one of the most central open problems in algebraic complexity, i.e.,
a separation between the VP and VNP complexity classes.
However, despite several efforts, the best-known lower bound for general
circuits remains Omega(N log N), where N is the number of input variables.
In the case of arithmetic formulas, an Omega(N^2) lower bound is known,
which has not been improved upon since.
The situation for depth three arithmetic circuits was also similar for
many years, until a recent result achieved an almost cubic lower bound,
improving upon the previous best quadratic lower bound.
However, unlike for depth three circuits, proving a super-quadratic lower
bound for depth four circuits has remained open for many years.
Previous results implied a super-linear lower bound of Omega(N^{1.33})
for depth four circuits. As the main contribution of this thesis, we improve
upon this super-linear bound and prove an Omega(N^{1.5}) lower bound on the
size of depth four circuits computing an explicit N-variate polynomial in VNP
with degree d = O(sqrt{N}). Our approach offers a potential route to proving a
super-quadratic lower bound for depth four circuits. Taking a cue from the numerous
recent successful results, we use the technique of shifted partial derivative
measures to achieve the said lower bound. In particular, we use the Dimension of
Projected Shifted Partials (DPSP) measure. Coming to the choice of the hard polynomial,
we again follow the status quo and use a variant of the Nisan-Wigderson (NW) polynomial
family that has proved to be very helpful over the past few years in arithmetic circuit
complexity.
Finally, we carefully analyze two previous works on general depth
four circuits and compare their techniques to ours.
We conclude that our result can potentially be used as a starting point,
and that techniques similar to those of the almost cubic lower bound result for depth
three circuits can likely be used to strengthen our lower bound to
Omega(N^{2.5}) for general depth four arithmetic circuits.
Series: Department Seminar
Title: SymSum: Symmetric-Sum Distinguishers Against Round-Reduced SHA3
Speaker: Dr. Dhiman Saha
Date and Time: Friday, September 15, 2017, 12:00 PM
Venue: CSA Multimedia Class (Room No. 252, First Floor)
Abstract: Cryptographic hash functions are ubiquitous in our day-to-day digital lives. Primarily, they ensure data integrity, one of the fundamental cryptographic goals, making the analysis of these hash functions imperative. After briefly introducing the research area, this talk will focus on devising "distinguishers" for the latest cryptographic hash function, SHA3. Distinguishers constitute a specific paradigm of analyzing hash functions that deals with exhibiting non-random behavior. The talk will then zoom in on the domain of higher-order derivatives of Boolean functions and their application to the analysis of SHA3 in the form of the Zero-Sum property. Finally, our latest research result, which proposes a new class of distinguishers for SHA3, will be presented. The technique relies on a variant of classical higher-order Boolean derivatives over special subspaces. It exploits the internal symmetry of the hash function to arrive at the best distinguisher on SHA3, penetrating up to 9 rounds, succeeding with probability one, and outperforming the closest competitor, in terms of effort, by a factor of 4.
Speaker Bio: Dhiman Saha is a Research Associate in the Crypto Research Lab, Department of Computer Science and Engineering, IIT Kharagpur. He recently received his PhD from the department, with his broad area of research encompassing both theoretical and physical aspects of cryptography. As part of his thesis he worked on the cryptanalysis of hash functions and authenticated ciphers. He has broken three authenticated encryption schemes submitted to the ongoing CAESAR competition and has been involved in the analysis of the latest cryptographic hash algorithm, SHA3. Before joining for his PhD he worked in the Electronic Design Automation industry for over two years. He completed his MS degree from IIT Kharagpur in 2009 with a thesis on hardware security. He received his BE from NIT Agartala in 2006 and was awarded the Gold Medal for Computer Science and Engineering. He was also a recipient of the NEC Scholarship during his BE. Along with hash functions and authenticated ciphers, his recent research interests include memory-hard functions and proofs-of-work.
Host Faculty: Dr. Sanjit Chatterjee
Series: Department Seminar
Title: Provably Efficient Scheduling of Cache-Oblivious Wavefront Algorithms
Speaker: Pramod Ganapathi
Date and Time: Tuesday, September 12, 2017, 11:00 AM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: Iterative wavefront algorithms for evaluating dynamic programming recurrences exploit optimal parallelism but show poor cache performance. Tiled-iterative wavefront algorithms achieve optimal cache complexity and high parallelism but are cache-aware and hence are neither portable nor cache-adaptive. On the other hand, standard cache-oblivious recursive divide-and-conquer algorithms have optimal serial cache complexity but often have low parallelism due to artificial dependencies among subtasks. We present a generic algorithmic framework to systematically transform standard cache-oblivious recursive divide-and-conquer algorithms into recursive wavefront algorithms that achieve optimal parallel cache complexity and high parallelism under state-of-the-art schedulers for fork-join programs. The recursive wavefront algorithms are efficient both in theory and in practice.
Speaker Bio: Pramod Ganapathi received his PhD in Computer Science, specializing in algorithms, from Stony Brook University, New York, USA in 2016. He was supervised by Prof. Rezaul Chowdhury. During his PhD, he designed algorithms (and frameworks) that can automatically (and semi-automatically) discover efficient recursive divide-and-conquer algorithms for a wide class of dynamic programming problems. Currently, he is the founder and CEO of a startup called Learning is Beautiful, based in Bengaluru. The company aims to teach hard-core problem-solving and deep thinking on higher education topics through simple and intuitive animated videos.
Host Faculty: Uday Kumar Reddy B
Series: CSA Distinguished Lecture
Title: Technology Disruptions and Innovation at Intersections
Speaker: Mr. K Ananth Krishnan, Chief Technology Officer, Tata Consultancy Services (TCS)
Date and Time: Friday, September 08, 2017, 11:30 AM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: What is the view from the CTO of a company at the intersection of technology
and the businesses of the 1000 biggest corporations globally, across 15 industries?
Is the global industry hungry for research breakthroughs?
What are the big disruptions across industries, and how are these triggered?
How are industries responding, and which companies will succeed?
Which areas of research are most in demand, and are moving at great speed
from idea to impact?
How is TCS Research and Innovation responding to this market?
Ananth directs TCS' Research and Innovation and will highlight the current scenario
and provide real-world examples from around the world. He will also explain why the
sciences are intersecting with computing to drive new ideas and create value.
Speaker Bio: As CTO, Ananth directs research and innovation in Tata Consultancy Services. Ananth
has architected a 4E model to make invention, innovation and co-innovation at TCS
deliver value to TCS business and its customers. Under his leadership the TCS research
community has created a significant portfolio of patents, papers and IP.
Ananth has been a member of the TCS Corporate Think Tank since 1999, and has led several
strategic initiatives. He has been a Principal Architect and Lead Consultant in the
Architecture and Technology Consulting Practice, and earlier the head of the TCS Systems
Management and the Systems Software Group.
Ananth was elected a Fellow of the Indian National Academy of Engineering (INAE) in 2013
in recognition of his contributions to engineering. He is a Senior Member of the IEEE and a
member of the Computer Society of India, and is an invited faculty in the Department of
Management Studies at the Indian Institute of Technology, Madras.
Named a Distinguished Alumnus of the Indian Institute of Technology, Delhi in 2009, Ananth
has been listed in Computerworld’s Premier 100 IT Leaders (2007). He has also been chosen as
one of Infoworld’s Top 25 CTOs (2007).
Ananth holds an M.Tech. in Computer Science from the Indian Institute of Technology, Delhi. He
also has a Master's degree in Physics from the same Institute and a Bachelor's degree in Physics
from Fergusson College, Pune.
Host Faculty: Prof. Y Narahari
Series: Department Seminar
Title: Intercepting Functions for Memoization
Speaker: Arjun Suresh
Date and Time: Friday, September 01, 2017, 3:00 PM
Venue: CSA Multimedia Class (Room No. 252, First Floor)
Abstract: In this talk, we present mechanisms to implement function memoization at the software level. We analyze the potential of function memoization in applications and its performance gain on current architectures. We propose three schemes: a simple load-time approach that works for any dynamically linked function; a compile-time approach using the LLVM framework, which can enable memoization for any program function; and finally a hardware proposal for performing memoization in hardware, along with its potential benefits. Demonstration of the link-time approach with transcendental functions showed that memoization is applicable and gives good benefit even under modern architectures and compilers (with the restriction that it can be applied only to dynamically linked functions). Our compile-time approach extends the scope of memoization and also increases its benefit. It works for both user-defined functions and library functions, and can handle certain kinds of non-pure functions, such as functions with pointer arguments and global variable usage. Hardware memoization lowers the threshold for a function to be memoized and gives more performance gain on average.
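As a hedged illustration of the basic trade-off the talk exploits, a software-level memoization wrapper for a pure function: repeated calls with the same arguments return a cached result instead of recomputing. This is only a sketch of the idea; the talk's actual mechanisms intercept dynamically linked functions at load/link time and at compile time via LLVM:

```python
import functools
import math

def memoize(fn):
    """Cache results of a pure function keyed on its arguments; a repeat
    call becomes a dictionary lookup instead of a recomputation."""
    cache = {}
    stats = {'misses': 0}

    @functools.wraps(fn)
    def wrapper(*args):
        if args not in cache:
            stats['misses'] += 1      # first time seeing these arguments
            cache[args] = fn(*args)   # pay the full cost once
        return cache[args]

    wrapper.misses = stats
    return wrapper

slow_sin = memoize(math.sin)
vals = [slow_sin(x) for x in [0.5, 1.0, 0.5, 1.0, 0.5]]
assert vals[0] == vals[2] == vals[4] == math.sin(0.5)
assert slow_sin.misses['misses'] == 2  # only two distinct arguments computed
```

For a cheap function like `sin`, the lookup can cost as much as the call; the approaches in the talk target expensive transcendental and user-defined functions, where the cached path wins.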
Speaker Bio: Arjun Suresh is currently working on certain extensions of GATE Overflow, a complete exam preparation package for GATE preparation for the Computer Science stream. Before this, he did his postdoctoral research at The Ohio State University on high-performance GPU implementations of tensor algebra operations. His doctoral studies were completed under the guidance of Dr. Erven Rohou in the ALF team at INRIA, Rennes, and his Ph.D. thesis was titled "Intercepting Functions for Memoization". Prior to that, he obtained his Master's in Engineering from the Dept. of CSA at the Indian Institute of Science under the guidance of Prof. Uday Kumar Reddy B, and his Bachelor's from the College of Engineering, Trivandrum.
Host Faculty: Uday Kumar Reddy B
Series: CSA Distinguished Lecture
Title: Unleashing the Potential of Heterogeneous Computing for Mobile and IoT Devices
Speaker: Tulika Mitra, Professor of Computer Science, School of Computing, National University of Singapore
Date and Time: Friday, September 01, 2017, 11:00 AM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: Transistor count continues to increase for silicon devices following Moore's Law. But the failure of Dennard scaling
has brought the computing community to a crossroads where power/thermal has become the major limiting factor, especially
in mobile and IoT devices with constrained form factors. Thus future chips in these devices can have many cores, but
only a fraction of them can be switched on at any point in time. This dark silicon era is driving the emergence of
heterogeneous multicore architectures consisting of cores with diverse computational capabilities and power-performance
characteristics, enabling a better match between the application requirements and the compute engine, leading to substantially
improved energy efficiency. In this talk, I will introduce the technology trends driving heterogeneous multicores, provide an overview of computationally divergent and performance heterogeneous multicores, and present my research group's efforts
in architecture, compiler, and runtime support to fully realize the potential of heterogeneity towards energy-efficient computing.
Speaker Bio: Tulika Mitra is a Professor of Computer Science at the School of Computing, National University of Singapore (NUS). She received her PhD from the State University of New York at Stony Brook (2000) and her M.E. from the Indian Institute of Science (1997). Her research interests span various aspects of the design automation of embedded real-time systems, cyber-physical systems (CPS), and the Internet of Things (IoT), with particular emphasis on energy-efficient computing, heterogeneous computing, application-specific processors, and software timing analysis/optimizations. She has authored over a hundred scientific publications in leading international journals and conferences in this domain. Her research has been recognized by the best paper award at FPT 2012 and best paper nominations at DAC (2016, 2012, 2009), DATE, VLSI Design, CODES+ISSS, FPL, ECRTS, and CASES. Prof. Mitra serves as Senior Associate Editor of the ACM Transactions on Embedded Computing Systems, Deputy Editor-in-Chief of IEEE Embedded Systems Letters, and Associate Editor of IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, IEEE Design & Test Magazine, EURASIP Journal on Embedded Systems, and IET Computers & Digital Techniques. She has served on the organizing and program committees of several major conferences in embedded systems, real-time systems, and electronic design automation.
Host Faculty: R. Govindarajan
Series: M.Sc. (Engg) Thesis Defense
Title: Fast Actively Secure OT Extension for Short Secrets
Speaker: Mr. Ajith S, M.Sc. (Engg) Student, Dept. of CSA, IISc
Faculty Advisor: Dr. Arpita Patra
Date and Time: Thursday, August 31, 2017, 4:30 PM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: Oblivious Transfer (OT) is one of the most fundamental cryptographic primitives,
with widespread application in general secure multiparty computation (MPC) as
well as in a number of tailored and special-purpose problems of interest such
as private set intersection (PSI), private information retrieval (PIR), and contract
signing, to name a few. Often the instantiations of OT require prohibitive
communication and computation complexity. OT extension protocols were introduced
to compute a very large number of OTs, referred to as extended OTs, at the cost of a
small number of OTs, referred to as seed OTs.
We present a fast OT extension protocol for small secrets in the active setting. Our protocol,
when used to produce 1-out-of-n OTs, outperforms all known actively secure OT
extensions. Our protocol is built on the semi-honest secure extension protocol of Kolesnikov
and Kumaresan of CRYPTO'13 (referred to henceforth as the KK13 protocol), which is the best-known
OT extension for short secrets. At the heart of our protocol lies an efficient consistency-checking
mechanism that relies on the linearity of Walsh-Hadamard (WH) codes. Asymptotically,
our protocol adds a communication overhead of O(µ log κ) bits over the KK13 protocol irrespective
of the number of extended OTs, where κ and µ refer to the computational and statistical security
parameters respectively. Concretely, our protocol, when used to generate a large enough
number of OTs, adds only 0.011-0.028% communication overhead and 4-6% runtime overhead,
in both LAN and WAN settings, over the KK13 extension. The runtime overheads drop below 2% when,
in addition, the number of inputs of the sender in the extended OTs is large enough.
As an application of our proposed extension protocol, we show that it can be used to
obtain the most efficient PSI protocol secure against a malicious receiver and a semi-honest sender.
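The linearity of Walsh-Hadamard codes that the consistency check relies on can be illustrated with a toy encoder (the protocol's actual code length and parameters differ): the codeword of a k-bit message m lists the inner product ⟨m, x⟩ mod 2 for every x in {0,1}^k, so encoding is a linear map over GF(2):

```python
import itertools

def wh_encode(m_bits):
    """Walsh-Hadamard encoding of a k-bit message m: one bit <m, x> mod 2
    for every x in {0,1}^k, giving a codeword of length 2^k."""
    k = len(m_bits)
    return [sum(mi * xi for mi, xi in zip(m_bits, x)) % 2
            for x in itertools.product([0, 1], repeat=k)]

m1, m2 = [1, 0, 1], [0, 1, 1]
xor = [a ^ b for a, b in zip(m1, m2)]
# Linearity: the encoding of the XOR equals the bitwise XOR of the encodings.
assert wh_encode(xor) == [a ^ b for a, b in zip(wh_encode(m1), wh_encode(m2))]
```

It is exactly this homomorphism (XOR of messages maps to XOR of codewords) that makes cheap consistency checks on linear combinations possible without opening the underlying values.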
Series: Department Seminar
Title: Approximating Geometric Knapsack via L-packings
Speaker: Dr. Arindam Khan, Researcher, IDSIA, SUPSI, Switzerland
Date and Time: Monday, August 21, 2017, 11:00 AM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: We study the two-dimensional geometric knapsack problem (2DK), a geometric
variant of the classical knapsack problem. In this problem, we are given a
set of axis-aligned rectangular items, each one with an associated profit,
and an axis-aligned square knapsack. The goal is to find a (non-overlapping)
packing of a maximum profit subset of items inside the knapsack without
rotating items. This is a very well-studied optimization problem and finds
applications in scheduling, memory allocation, advertisement placement, etc.
The best-known polynomial-time approximation factor for this problem (even just
in the cardinality case) is $2+eps$ [Jansen and Zhang, SODA 2004].
After more than a decade, in this paper we break the 2-approximation barrier,
achieving a polynomial-time $17/9+eps<1.89$ approximation, which improves to
$558/325+eps<1.72$ in the cardinality case.
We also consider the variant of the problem with rotations (2DKR), where the
items can be rotated by $90$ degrees. In this case as well, the best-known polynomial-time
approximation factor (even for the cardinality case) is $2+eps$ [Jansen and Zhang,
SODA 2004]. Exploiting part of the machinery developed for 2DK plus a few additional
ideas, we obtain a polynomial-time $3/2+eps$ approximation for 2DKR, which improves to $4/3+eps$ in the cardinality case.
This is joint work with Waldo Galvez, Fabrizio Grandoni, Sandy Heydrich,
Salvatore Ingala and Andreas Wiese. The result will appear in FOCS 2017.
Speaker Bio: Arindam Khan is a researcher at IDSIA, SUPSI in Lugano, Switzerland. His
research areas include approximation algorithms, online algorithms and
computational geometry. He obtained his PhD in Algorithms, Combinatorics
and Optimization (ACO) from the Georgia Institute of Technology, Atlanta, USA under
Prof. Prasad Tetali. Previously he has been a research intern in the Theory group at
Microsoft Research Redmond and at Microsoft Research Silicon Valley, USA, a visiting
researcher at the Simons Institute, Berkeley, USA, and a Blue Scholar at IBM Research India.
Host Faculty: Dr. Anand Louis
Series: Ph.D. Thesis Defense
Title: Number Theoretic, Computational and Cryptographic Aspects of a Certain Sequence of Arithmetic Progressions
Speaker: Mr. Cherukapally Srikanth, Ph.D. student, Dept. of CSA
Faculty Advisors: Prof. C.E. Veni Madhavan & Dr. Sanjit Chatterjee
Date and Time: Wednesday, August 09, 2017, 11:00 AM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: This thesis introduces a new mathematical object: a collection of
arithmetic progressions whose elements satisfy the inverse property, namely that
the jth terms of the ith and (i+1)th progressions are multiplicative
inverses of each other modulo the (j+1)th term of the ith progression.
Such a collection is uniquely defined for any pair $(a,d)$ of coprime integers.
The progressions of the collection are ordered; thus we call it a sequence rather
than a collection. The results of the thesis concern the following number theoretic, computational
and cryptographic aspects of the defined sequence and its generalizations.
The sequence is closely connected to the classical Euclidean
algorithm. More precisely, certain consecutive progressions of the sequence
form ``groupings''. The difference between the common differences of
any two consecutive progressions of a grouping is the same. The number of
progressions in a grouping is connected to the quotient sequence
of the Euclidean algorithm on coprime input pairs. The behavior of the Euclidean
algorithm has been studied extensively by the research community; the connection
proven in the thesis shows, for the first time in the literature,
what the quotients of the algorithm signify. Further, the leading terms of progressions
within groupings satisfy a mirror-image symmetry property,
called ``symmetricity''. The property depends on the quotient
sequence of the Euclidean algorithm and on
divisors of integers of the form $x^2-y^2$ falling in specific intervals.
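The quotient sequence referred to above is the sequence of partial quotients produced by running the Euclidean algorithm on a coprime pair $(a,d)$. As a small, generic illustration (this computes only the quotient sequence itself, not the thesis's groupings):

```python
def euclid_quotients(a, d):
    """Quotient sequence of the Euclidean algorithm on (a, d),
    i.e. the continued-fraction expansion of a/d."""
    quotients = []
    while d != 0:
        q, r = divmod(a, d)   # a = q*d + r with 0 <= r < d
        quotients.append(q)
        a, d = d, r
    return quotients

# For (13, 5): 13 = 2*5 + 3, 5 = 1*3 + 2, 3 = 1*2 + 1, 2 = 2*1 + 0,
# so the quotient sequence is [2, 1, 1, 2].
```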
The integers $a$ and $d$ are the primary quantities of the defined sequence in a computational sense.
Given the two, the leading term and common difference of any progression of the sequence
can be computed in time quadratic in the binary length of $d$.
On the other hand, the inverse computational question of
finding $(a,d)$, given information on some terms of the sequence, is interesting.
This problem turns out to be hard, as it requires finding solutions to
a nearly determined system of multivariate polynomial equations. Two subproblems
arising in this context are shown to be equivalent to the problem of factoring integers. The reduction
to the factoring problem is, in both cases, probabilistic.
Utilizing the computational difficulty of the inverse problem and the subproblems mentioned above,
we propose a symmetric-key cryptographic scheme (SKCS)
and a public-key cryptographic scheme (PKCS). The PKCS is also based on the hardness of
finding square roots modulo composite integers. Our proposal uses the same
algorithmic and computational primitives
for effecting both the PKCS and the SKCS. In addition, we use the notion of
the sequence of arithmetic progressions to design an entity authentication scheme.
The proof of equivalence between one of the inverse computational problems mentioned above
and integer factoring led us to formulate and investigate
an independent problem concerning the largest divisor of an integer $N$ bounded by
the square root of $N$. We present some algorithmic and combinatorial results.
In the course of the above investigations, we are led to certain open questions of a number theoretic,
combinatorial and algorithmic nature. These pertain to the quotient sequence of the Euclidean algorithm,
divisors of integers of the form $x^2-y^2$ in specific intervals,
and the largest divisor of an integer $N$ bounded by its square root.
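The last of these quantities, the largest divisor of $N$ not exceeding its square root, can of course be computed naively by trial division; the thesis studies the structure and complexity of this quantity, not this brute-force sketch:

```python
import math

def largest_divisor_below_sqrt(n):
    """Largest divisor of n that is at most sqrt(n).
    Naive trial division up to isqrt(n); e.g. for n = 36 the answer
    is 6, and for a prime n it is 1."""
    best = 1
    for d in range(1, math.isqrt(n) + 1):
        if n % d == 0:
            best = d
    return best
```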
 Series: Ph.D. Colloquium Title: Approximation Algorithms for Geometric Packing and Covering Problems  Speaker: Mr. Aniket Basu Roy
Ph.D. student at the CSA department  Faculty Advisor: Prof. Sathish Govindarajan
 Date and Time: Monday, August 07, 2017, 11:00 AM
 Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract We study a host of geometric optimization problems that are NP-hard and design
polynomial-time approximation algorithms for them. More precisely, we are given a family
of geometric objects and a point set, mostly in the plane, and study different variants
and generalizations of packing and covering problems. Our objects of study are mostly
families of non-piercing regions in the plane. We call a set of simple, connected regions
non-piercing if, for any pair of intersecting regions A and B, their set difference
A \ B is a connected region; e.g., disks, halfplanes, etc. Lines and rectangles, in
contrast, are examples of piercing objects. For most of the problems we have studied,
a simple local search algorithm is enough to yield a PTAS; the approximation analysis
requires a suitable bipartite graph on the local search solution and the optimal solution
to have balanced sublinear separators.
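As a toy illustration of the local-search paradigm (not the PTAS itself, whose swap sizes and separator-based analysis are far more delicate), here is a 1-out, 2-in local search for independent sets of disks, a canonical non-piercing family; the disk representation and the fixed swap rule are our own simplifications:

```python
def disks_intersect(d1, d2):
    """Disks given as (x, y, r) tuples intersect iff the distance
    between centers is at most the sum of radii."""
    (x1, y1, r1), (x2, y2, r2) = d1, d2
    return (x1 - x2) ** 2 + (y1 - y2) ** 2 <= (r1 + r2) ** 2

def local_search_independent_set(disks):
    """Greedy seed, then repeatedly swap one chosen disk out for
    two or more non-chosen disks while independence is preserved."""
    def independent_with(d, chosen):
        return all(not disks_intersect(d, c) for c in chosen)

    solution = []
    for d in disks:                       # greedy initial solution
        if independent_with(d, solution):
            solution.append(d)
    improved = True
    while improved:
        improved = False
        for out in list(solution):
            rest = [c for c in solution if c != out]
            gain = []
            for d in disks:
                if d not in solution and independent_with(d, rest + gain):
                    gain.append(d)
            if len(gain) >= 2:            # net gain: 1 out, >= 2 in
                solution = rest + gain
                improved = True
                break
    return solution
```

For example, a large disk blocking two far-apart small disks is swapped out in favor of the two small ones.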
We study a generalization of the standard packing problem, called the capacitated packing problem,
and a slight variant of it, the shallow packing problem. We devise a PTAS for both of these problems
for constant capacities. For the former problem the objects are non-piercing, whereas for the latter
the objects can be even more general, needing only subquadratic, monotonic union complexity.
The non-trivial part here is to show that the intersection graph of arrangements with shallow depth,
which is not planar, has balanced sublinear separators. Our results complement the Maximum Independent
Set of Rectangles problem, as rectangles are both piercing and have quadratic union complexity. We
also study the capacitated packing problem for the dual set system and are able to show that local
search works here as well for unit capacity; we also devise a constant-factor approximation algorithm
using the Bronnimann-Goodrich technique, which is inspired by the method of multiplicative weight updates.
The Runaway Rectangle Escape problem is closely related to the above packing problems and is motivated
by routing in printed circuit boards. Here we are given a set of axis-parallel rectangles inside a
rectangular boundary and a maximum allowed depth d. The objective is to extend the maximum number
of input rectangles in one of the four directions such that the maximum depth of a point is at
most d after extension. We devise several approximation algorithms and a hardness result.
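The depth constraint above can be made concrete: the depth of a point is the number of rectangles covering it, and the maximum depth of an arrangement is attained on the grid induced by rectangle corners. A brute-force check (our own illustrative sketch, not an algorithm from the work):

```python
def max_depth(rects):
    """Maximum number of axis-parallel rectangles (x1, y1, x2, y2)
    covering a single point. The maximum is attained inside some cell
    of the grid induced by corner coordinates, so it suffices to test
    one sample point per cell."""
    xs = sorted({x for x1, y1, x2, y2 in rects for x in (x1, x2)})
    ys = sorted({y for x1, y1, x2, y2 in rects for y in (y1, y2)})
    best = 0
    for i in range(len(xs) - 1):
        for j in range(len(ys) - 1):
            cx, cy = (xs[i] + xs[i + 1]) / 2, (ys[j] + ys[j + 1]) / 2
            depth = sum(x1 <= cx <= x2 and y1 <= cy <= y2
                        for x1, y1, x2, y2 in rects)
            best = max(best, depth)
    return best
```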
We study different variants of the covering problem, like Unique Coverage, Multi-Covering,
Prize-Collecting Set Cover, Exact Cover, and Multi-Hitting Set. For the first three problems,
we show that local search yields a PTAS for non-piercing regions under some assumptions. For the
Exact Cover problem, we prove that even finding a feasible solution is NP-hard for objects as
special as unit squares. For the Multi-Hitting Set problem, we propose a constant-factor
approximation algorithm for non-piercing regions, using the fact that they have shallow dual-cell complexity.
Lastly, we consider variants of the Art Gallery problem called the Minimum (Horizontal)
Sliding Cameras problem, M(H)SC. We are given an orthogonal polygon and we need to deploy
mobile guards who can walk along an orthogonal (horizontal) line segment; a guard covers a point
inside the polygon if the perpendicular drawn from the point onto the line segment lies inside
the polygon. Our local search algorithm yields a PTAS for the MHSC problem, and also for the MSC
problem when the polygon has no holes. Here again, the non-trivial part is to prove that an
appropriate graph on orthogonal line segments is planar. We also observe that the MSC problem
for polygons with holes is APX-hard.
 Series: CSA Distinguished Lecture Title: MULTI-OBJECTIVE NAVIGATION IN UNCERTAINTY  Speaker: Professor Krishna R. Pattipati
University of Connecticut
Storrs, Connecticut
USA  Date and Time: Thursday, August 03, 2017, 3:00 PM
 Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract Routing in uncertain environments involves many contextual elements, such as different environmental
conditions (ensemble forecasts with varying spatial and temporal uncertainty), multiple objectives,
changes in mission goals while en route (e.g., training requirements, pop-up threats, humanitarian aid),
and asset status. In this talk, we focus on robust planning under uncertainty by exploiting weather
forecast ensembles and realizations using TMPLAR, a Tool for Multi-objective PLanning and Asset Routing,
in the context of 3D and 4D path navigation. Our approach includes solving for the m-best shortest paths
for each weather forecast realization via Murty's search space partitioning strategy, and evaluating
the mean, variance, and signal-to-noise ratio (SNR) of the paths over all weather possibilities.
The robust path is one that, for example, minimizes the root mean square (RMS) value, or one that
maximizes the SNR, given the possible forecast realizations. In a complementary effort to compact the
feature-rich, multidimensional navigational problem space, we develop a self-organizing map
(SOM)-based data reduction scheme that filters complex contexts and highlights only the key
impacting features within a given information space. We demonstrate the utility of our approaches
via application to a real-world shipping tragedy, using the weather forecast realizations available prior to the event.
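Once the m-best paths have been costed under every forecast realization, the RMS and SNR criteria described above reduce to simple statistics over the cost vectors. A minimal sketch; the dictionary format and the definition of SNR as mean over standard deviation of cost are our own assumptions, not TMPLAR's interface:

```python
import math

def robust_path(path_costs):
    """path_costs: {path_id: [cost under each forecast realization]}.
    Returns (path minimizing RMS cost, path maximizing SNR), where
    SNR = mean / standard deviation of the cost over realizations."""
    def rms(cs):
        return math.sqrt(sum(c * c for c in cs) / len(cs))

    def snr(cs):
        mean = sum(cs) / len(cs)
        var = sum((c - mean) ** 2 for c in cs) / len(cs)
        # zero variance means a perfectly predictable cost
        return mean / math.sqrt(var) if var > 0 else float("inf")

    min_rms = min(path_costs, key=lambda p: rms(path_costs[p]))
    max_snr = max(path_costs, key=lambda p: snr(path_costs[p]))
    return min_rms, max_snr

# A path that is slightly longer but stable across forecasts can beat
# a path that is shorter on average but volatile.
```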
Speaker Bio: Krishna R. Pattipati received the B.Tech. degree in electrical engineering with highest honours from the Indian Institute of Technology, Kharagpur, in 1975, and the M.S. and Ph.D. degrees in control and communication systems from UConn, Storrs, in 1977 and 1980, respectively. He was with ALPHATECH, Inc., Burlington, MA from 1980 to 1986. He has been with the Department of Electrical and Computer Engineering at UConn since 1986, where he is currently the Board of Trustees Distinguished Professor and the UTC Chair Professor in Systems Engineering. Dr. Pattipati’s research activities are in the areas of proactive decision support, uncertainty quantification, smart manufacturing, autonomy, knowledge representation, and optimization-based learning and inference. A common theme among these applications is that they are characterized by a great deal of uncertainty, complexity, and computational intractability. He is a co-founder of Qualtech Systems, Inc., a firm specializing in advanced integrated diagnostics software tools (TEAMS, TEAMS-RT, TEAMS-RDS, TEAMATE, PackNGo), and serves on the board of Aptima, Inc.
Dr. Pattipati was selected by the IEEE Systems, Man, and Cybernetics (SMC) Society as the Outstanding Young Engineer of 1984, and received the Centennial Key to the Future award. He served as the Editor-in-Chief of the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS, PART B from 1998 to 2001. He was co-recipient of the Andrew P. Sage Award for the Best SMC Transactions Paper for 1999, the Barry Carlton Award for the Best AES Transactions Paper for 2000, the 2002 and 2008 NASA Space Act Awards for "A Comprehensive Toolset for Model-based Health Monitoring and Diagnosis" and “Real-time Update of Fault-Test Dependencies of Dynamic Systems: A Comprehensive Toolset for Model-Based Health Monitoring and Diagnostics”, the 2003 AAUP Research Excellence Award, and the 2005 School of Engineering Teaching Excellence Award at the University of Connecticut. He is an elected Fellow of IEEE for his contributions to discrete-optimization algorithms for large-scale systems and team decision-making, and is a member of the Connecticut Academy of Science and Engineering.
Host Faculty: Prof. Y Narahari
 Series: Department Seminar Title: Data Driven Precondition Inference for Compiler Optimizations  Speaker: Dr. Santosh Nagarakatte
Assistant Professor
Department of Computer Science, Rutgers University  Date and Time: Thursday, August 03, 2017, 11:00 AM
 Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract Peephole optimizations are a common source of compiler bugs.
Compiler developers typically transform an incorrect peephole
optimization into a valid one by strengthening the precondition. This
process is challenging and tedious. In this talk, I will describe our
new data-driven approach for inferring preconditions for peephole
optimizations expressed in Alive. Our main idea is to generate positive
and negative examples for an optimization in a context without tests,
enumerate predicates on demand, and learn a set of predicates that
separate the positive and negative examples. We repeat this process
until we are able to find a precondition that ensures the validity of
the optimization. Our prototype, Alive-Infer, reports both a weakest
precondition and a set of succinct partial preconditions to the
developer. Our prototype generates preconditions that are weaker than
LLVM's preconditions for the majority of the optimizations in the Alive
suite. The talk will also demonstrate the applicability of this
technique to generalize optimization patterns generated by Souper, an
LLVM IR-based superoptimizer.
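The separate-and-conjoin idea at the core of the approach can be sketched as a greedy cover over predicates: keep only the predicates that hold on every positive example, then conjoin predicates until all negative examples are rejected. This is a toy rendition under our own simplifying assumptions, not the actual algorithm of the Alive-Infer prototype:

```python
def learn_precondition(positives, negatives, predicates):
    """Greedily build a conjunction of predicates that accepts every
    positive example and rejects every negative one. Returns the chosen
    predicates, or None if the pool cannot separate the examples."""
    # Only predicates true on all positives may appear in the conjunction.
    candidates = [p for p in predicates if all(p(e) for e in positives)]
    chosen, remaining = [], list(negatives)
    while remaining:
        # Pick the candidate rejecting the most still-accepted negatives.
        best = max(candidates,
                   key=lambda p: sum(not p(e) for e in remaining),
                   default=None)
        if best is None or all(best(e) for e in remaining):
            return None                   # no candidate rejects anything
        chosen.append(best)
        candidates.remove(best)
        remaining = [e for e in remaining if best(e)]
    return chosen
```

For instance, with positives {2, 4, 8} and negatives {3, -2}, the pool {is_even, is_positive, is_large} yields the conjunction is_even AND is_positive.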
Speaker Bio: Santosh Nagarakatte is an Assistant Professor of Computer Science
at Rutgers University. He obtained his PhD from the University of
Pennsylvania in 2012. His research interests are in hardware-software
interfaces spanning programming languages, compilers, software
engineering, and computer architecture. His papers have been selected as
IEEE Micro Top Picks papers of computer architecture conferences in 2010
and 2013. He has received the NSF CAREER Award in 2015, the ACM SIGPLAN PLDI
2015 Distinguished Paper Award, and the ACM SIGSOFT ICSE 2016 Distinguished
Paper Award for his research on LLVM compiler verification. His papers
have been selected as a 2016 SIGPLAN Research Highlights Paper and a
2017 Communications of the ACM Research Highlights Paper.
Host Faculty: Prof. Vinod Ganapathy
 Series: Department Seminar Title: The program induction route to explainable AI  Speaker: Prof. Subramanian Ramamoorthy
 Date and Time: Wednesday, August 02, 2017, 11:30 AM
 Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract The confluence of advances in diverse areas including machine learning, large-scale
computing and reliable commoditized hardware has brought autonomous robots to the
point where they are poised to be genuinely a part of our daily lives. Some of the
application areas where this seems most imminent, e.g., autonomous vehicles, also
bring with them stringent requirements regarding safety, explainability and
trustworthiness. These needs seem to be at odds with the ways in which recent
successes have been achieved, e.g., with end-to-end learning. In this talk, I will
try to make a case for an approach to bridging this gap, through the use of programmatic
representations that intermediate between opaque but efficient learning methods and
other techniques for reasoning that benefit from ‘symbolic’ representations.
I will start by attempting to frame the overall problem, drawing on some of the
motivations of the DARPA Explainable AI program (under the auspices of which we are
starting a new project, COGLE) and on extant ideas regarding safety and dynamical
properties in the control theorists’ toolbox.
Our first result will be based on a framework for Grounding and Learning Instances
through Demonstration and Eye tracking (GLIDE). The problem here is to learn the
mapping between abstract plan symbols and their physical instances in the environment,
i.e., physical symbol grounding, starting from cross-modal input that combines
high-level task descriptions (e.g., from a natural language instruction) with a
detailed video or joint-angle signal. This problem is formulated in terms of a
probabilistic generative model and addressed using an algorithm for computationally
feasible inference that associates traces of task demonstration with sequences of fixations,
which we call fixation programs.
The second, related result more directly addresses the issue of inferring processes that
underlie observed black-box phenomena, in the form of causal mechanisms. We propose to
learn high-level programs in order to represent abstract models, which capture the
invariant structure in the observed data. We introduce the pi-machine (program-induction
machine), an architecture able to induce interpretable LISP-like programs from observed
data traces. We propose an optimization procedure for program learning based on
backpropagation, gradient descent and A* search. We apply the proposed method to problems
including system identification, explaining the behavior of a DQN agent, and explaining
human demonstrations in planned activities. Our results show that the pi-machine can
efficiently induce interpretable programs from individual data traces.
Speaker Bio: Prof. Subramanian Ramamoorthy is a Reader (Associate Professor) in the School of Informatics, University of Edinburgh, where he has been on the faculty since 2007. He is a Coordinator of the EPSRC Robotarium Research Facility, and an Executive Committee Member for the Edinburgh Centre for Robotics and the Bayes Centre. He received his PhD in Electrical and Computer Engineering from The University of Texas at Austin in 2007. He is an elected Member of the Young Academy of Scotland at the Royal Society of Edinburgh, and has been a Visiting Professor at Stanford University and the University of Rome “La Sapienza”. His research focus is on robot learning and decision-making under uncertainty, addressed through a combination of machine learning techniques, with emphasis on issues of transfer, online and reinforcement learning, as well as new representations and analysis techniques based on geometric/topological abstractions.
Host Faculty: Aditya Kanade
 Series: Ph.D. Colloquium Title: Geometric and Topological Methods for Biomolecular Visualization  Speaker: Mr. Talha Bin Masood
Ph.D. student at the CSA department  Faculty Advisor: Prof. Vijay Natarajan
 Date and Time: Monday, July 31, 2017, 11:00 AM
 Venue: CSA Multimedia Class (Room No. 252, First Floor)
Abstract In recent years, techniques from the fields of scientific visualization and computational geometry
have increasingly found application in the study of biomolecular structure. In this thesis, we focus
on problems related to the identification and visualization of cavities and channels in proteins. In
the first part of the thesis, we describe two methods: one for extraction and visualization of
biomolecular channels, and the other for extraction of cavities in uncertain data. We also describe
two software tools based on the proposed methods, called ChExVis and RobustCavities. In the
second part of the thesis, we describe efficient GPU-based parallel algorithms for two geometric
structures widely used in the study of biomolecules. One of the structures we discuss is the discrete
Voronoi diagram, which finds applications in channel visualization, while the other structure
is the alpha complex, which is extremely useful in studying geometric and topological properties of biomolecules.
In this talk, we will focus on the software tool called ChExVis, designed for extraction and
visualization of channels in biomolecules. We will describe the alpha-complex-based framework
for extraction of geometrically feasible channels. In addition, we will present novel ways of
visualizing the amino acids lining the channels together with their physico-chemical properties.
We will discuss a few case studies that demonstrate the utility of the proposed method. Secondly,
we will describe a GPU-based algorithm for the computation of the alpha complex, a subcomplex of
the Delaunay triangulation. The algorithm exploits the knowledge of typical distributions and sizes
of atoms in biomolecules to achieve a speedup of up to 13x over the state-of-the-art algorithm.
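The alpha complex mentioned above is carved out of the Delaunay triangulation by a radius test; in the unweighted, uniform-radius case, the core of that test for a triangle is a circumradius threshold. A sketch of just that test (the full construction also keeps lower-dimensional faces and, for atoms of varying radii, uses a weighted/regular triangulation, both of which this omits):

```python
import math

def circumradius(a, b, c):
    """Circumradius of triangle abc in the plane, via R = abc / 4K
    with the area K from Heron's formula."""
    la = math.dist(b, c)
    lb = math.dist(a, c)
    lc = math.dist(a, b)
    s = (la + lb + lc) / 2
    area = math.sqrt(max(s * (s - la) * (s - lb) * (s - lc), 0.0))
    return float("inf") if area == 0 else la * lb * lc / (4 * area)

def alpha_filter(triangles, alpha):
    """Keep the triangles whose circumradius is at most alpha: the
    basic membership test when building a uniform-radius alpha complex
    from a Delaunay triangulation."""
    return [t for t in triangles if circumradius(*t) <= alpha]
```

For a right triangle with legs 2 and 2, the circumradius is half the hypotenuse, sqrt(2), so the triangle survives alpha = 1.5 but not alpha = 1.0.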
Speaker Bio: Talha is a Ph.D. candidate at the Dept. of Computer Science and Automation,
Indian Institute of Science, Bangalore. He received his B.Tech. degree from Aligarh Muslim University,
and his M.E. degree from the Indian Institute of Science, both in Computer Science and Engineering. His
research interests include scientific visualization, computational geometry, computational topology,
and their applications to structural biology.
 Series: Ph.D. Colloquium Title: Program analyses to support memory-saving refactorings in Java programs  Speaker: Girish Maskeri Rama
 Faculty Advisor: K. V. Raghavan
 Date and Time: Friday, July 28, 2017, 11:30 AM
 Venue: CSA Multimedia Class (Room No. 252, First Floor)
Abstract Software commonly consumes unexpectedly high amounts of memory, frequently
due to programming idioms that are used to make software more reliable,
maintainable and understandable. In the case of modern object-oriented
systems this problem is partly due to the creation of large numbers of
coexisting *isomorphic* objects. Intuitively, two objects are
isomorphic if they are of the same type, have identical values in
corresponding primitive fields, and are such that corresponding reference
fields themselves point to isomorphic objects. In other words, the portions
of memory rooted at the two objects are isomorphic in shape as well as in
values. A significant reduction in heap usage can therefore be
achieved if the code is refactored to *deduplicate* or *share*
objects whenever possible instead of always creating distinct but possibly
isomorphic objects. Such a refactoring, which employs a cache to keep track
of objects created so far and to share them, is termed object-sharing
refactoring. In practice, object-sharing refactoring is commonly used, as
it directly impacts the attributes of software that end users care about:
memory utilization and performance. However, manual refactoring is tedious
and error-prone.
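The refactoring described above, in which a cache returns an already-created isomorphic object instead of allocating a new one, can be illustrated in a few lines. This is a Python analogue of the Java refactoring; the class and its fields are hypothetical, and sharing is safe only because instances are treated as immutable:

```python
class SharedPoint:
    """Object sharing via a cache keyed by field values: constructing
    an isomorphic object returns the instance created earlier, so two
    'allocations' with equal fields yield one heap object."""
    _cache = {}

    def __new__(cls, x, y):
        obj = cls._cache.get((x, y))
        if obj is None:                 # first object with these values
            obj = super().__new__(cls)
            obj.x, obj.y = x, y
            cls._cache[(x, y)] = obj
        return obj
```

Here SharedPoint(1, 2) called twice returns the very same object, which is exactly the heap saving the thesis seeks to estimate and verify.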
To support object-sharing refactoring we have (1) designed and implemented
an approach to estimate the memory-savings potential of this refactoring,
(2) proposed the idea of *full initialization points* of objects, and a
static analysis to identify these points, and (3) proposed a scalable
refinement-based points-to analysis for verifying the safety of
object-sharing refactoring, which also has general applicability to several
other verification tasks.
(1) We present a dynamic analysis technique for *estimating*, for all the
allocation sites in a program and a given input, the reduction in heap
memory usage (in bytes, or as a percentage) to be obtained by employing
object sharing at each site. The quantitative estimates of a
user-observable benefit (i.e., actual memory savings) produced by our
technique make it easier for developers to select sites to refactor.
Experimentation with our estimation tool on real Java programs indicates
that nearly all applications have potential for reduction of memory usage
by object sharing, with up to 37% estimated reduction in heap footprint in
the best case.
(2) We define a novel concept termed full-initialization points (FIPs) to
characterize the points in the program where objects allocated at any
chosen allocation site become fully initialized. This notion supports
object-sharing refactoring as follows: the FIPs are good candidate program
points where the cache lookup can be performed to find isomorphic objects
and share them. Additionally, we argue that FIPs are useful in the context
of other allocation-site refactorings, such as immutability refactoring,
and refactoring to the builder pattern. We present a novel and
conservative static analysis to detect FIPs for a given allocation site. By
introducing code to cache and share objects at the FIPs suggested by our
analysis, object-sharing refactoring was able to obtain a mean memory
savings of 11.4% on a set of real Java benchmarks. Immutability refactoring
guided by our analysis achieved a mean runtime speedup of 1.6x compared to
performing the same refactoring using a baseline approach that did not make
use of FIPs.
(3) A standard points-to analysis approach, namely the object-sensitivity
approach, uses an equal level of precision to represent symbolic objects
allocated at all allocation sites in a program. This approach does not
scale to large programs unless low levels of precision are used. We show
how to use a user's query of interest to identify a limited set of
allocation sites that need a higher level of precision, and use higher
precision only at these sites. We have implemented our
approach. Experiments using our implementation show that for the task of
checking the safety of applying object-sharing refactoring at each site, our
approach gives a mean improvement in precision of 4.4% over the previous
approach, on a set of real benchmarks. For another well-known verification
task, namely downcast safety analysis, our approach results in a mean
precision improvement of 11.9% over the previous approach.
Speaker Bio: Girish Rama is employed at Infosys Ltd., and is a PhD student at CSA, IISc. His research interests are in software refactoring, software modularity and metrics, and program analysis techniques.
 Series: Ph.D. Thesis Defense Title: Power Issues in SoCs: Power-Aware DFT Architecture and Power Estimation  Speaker: Mr. Jaynarayan Thakurdas Tudu
Ph.D. student
Dept. of CSA, IISc  Faculty Advisor: Prof. Matthew Jacob Thazhuthaveetil
 Date and Time: Friday, July 28, 2017, 11:00 AM
 Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract Test power, data volume, and test time have been longstanding problems for sequential scan
based testing of system-on-chip (SoC) designs. Modern SoCs fabricated at lower technology
nodes are complex in nature; the transistor count is as large as billions of gates for some
microprocessors. The design complexity is projected to increase further in the coming years
in accordance with Moore's law. The larger gate count and the integration of multiple functionalities
are the causes of higher test power dissipation, test time and data volume. The dynamic power
dissipation during scan testing, i.e. during scan shift, launch and response capture, is a major
concern for reliable as well as cost-effective testing. Excessive average power dissipation leads
to a thermal problem which causes burnout of the chip during testing. Peak power, on the other hand,
causes test failure due to power-induced additional delay. Test failure has a direct impact on
yield. The test power problem in modern 3D stacked ICs is an even more serious issue. Estimating
the worst-case functional power dissipation is yet another great challenge. Worst-case functional
power estimation is necessary because it gives an upper bound on the functional power dissipation,
which can further be used to determine the safe power zone for the test.
Several solutions have been proposed in the past to address these issues. In this thesis
we make three major contributions: 1) sequential scan chain reordering, 2) JScan, an
alternative Joint-scan DFT architecture addressing primarily the test power issues along
with test time and data volume, and 3) an integer linear programming methodology to address
the power estimation problem. In order to reduce test power during shift, we have proposed
a graph-theoretic formulation for scan chain reordering and for optimum scan shift operation.
For each formulation a set of algorithms is proposed. The experimental results on ISCAS89
benchmark circuits show reductions of around 25% and 15% in peak power and scan shift time, respectively.
In order to have a holistic DFT architecture which could solve the test power, test time,
and data volume problems, a new DFT architecture called Joint-scan (JScan) has been
developed. In JScan we have integrated the serial and random-access scan architectures
in a systematic way, by which JScan harnesses the respective advantages of each
architecture. The serial scan architecture suffers from test power, test time,
and data volume problems; however, serial scan is simple in terms
of its functionality and is cost-effective in terms of DFT circuitry. The
random-access scan architecture is the opposite: it is power efficient and
requires less time and data volume compared to serial scan, but it occupies a
larger DFT area and introduces routing congestion. Therefore,
we have proposed a methodology to realize the JScan architecture as an efficient
alternative to standard serial and random-access scan. Further, the JScan
architecture is optimized, resulting in a 2-Mode Joint-scan (2M-JScan)
architecture. The proposed architectures are experimentally verified on larger
benchmark circuits and compared with existing state-of-the-art DFT architectures.
The results show a reduction of 50% to 80% in test power and 30% to 50% in test
time and data volume. The proposed architectures are also evaluated for routing
area minimization, and we obtained a saving of around 7% to 15% of chip area.
Since estimating the worst-case functional power is a challenging problem, we have
proposed a binary integer linear programming (BILP) based methodology. Two different
formulations have been proposed, considering two different delay models, namely
zero-delay and unit-delay. The proposed methodology generates a pair of input vectors
which toggle the circuit to dissipate the worst-case power. The BILP problems are solved
using the CPLEX solver for the ISCAS85 combinational benchmark circuits. For some of the
circuits, the proposed methodology provided the worst possible power dissipation, i.e.,
80 to 100% toggling in nets.
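On a toy combinational circuit, the quantity the BILP formulation optimizes, the maximum number of nets toggled by a pair of input vectors, can be found by exhaustive enumeration. The three-gate circuit below is our own illustrative example; the BILP exists precisely because this enumeration is exponential in the number of inputs:

```python
from itertools import product

def net_values(inputs):
    """Evaluate a toy combinational circuit: inputs a, b, c drive
    internal nets n1 = a AND b, n2 = b XOR c, and out = n1 OR n2.
    Returns the logic value on every net."""
    a, b, c = inputs
    n1 = a & b
    n2 = b ^ c
    out = n1 | n2
    return (a, b, c, n1, n2, out)

def worst_case_toggles():
    """Exhaustively search all pairs of input vectors for the pair
    that toggles the most nets, i.e. the worst-case switching
    activity of the circuit."""
    best, best_pair = -1, None
    for v1 in product((0, 1), repeat=3):
        for v2 in product((0, 1), repeat=3):
            toggles = sum(x != y for x, y in
                          zip(net_values(v1), net_values(v2)))
            if toggles > best:
                best, best_pair = toggles, (v1, v2)
    return best, best_pair
```

Note that all 6 nets can never toggle at once here: flipping both b and c leaves b XOR c unchanged, so the attainable worst case is 5 toggling nets, e.g. for the pair (0,0,0) and (1,1,1).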

