<p>UC Berkeley EECS Technical Reports</p>
<p>The UC Berkeley EECS Technical Memorandum Series provides a dated archive of EECS research. It includes Ph.D. theses and master's reports as well as technical documents that complement traditional publication media such as journals. For example, technical reports may document work in progress, early versions of results that are eventually published in more traditional media, and supplemental information such as long proofs, software documentation, code listings, or elaborated examples.</p>
<p><a href="http://www2.eecs.berkeley.edu/Pubs/TechRpts/">http://www2.eecs.berkeley.edu/Pubs/TechRpts/</a></p>
<p>Queries on Compressed Data</p>
<p>
Anurag Khandelwal</p>
<p>
EECS Department<br>
University of California, Berkeley<br>
Technical Report No. UCB/EECS-2019-141<br>
November 2, 2019</p>
<p>
<a href="http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-141.pdf">http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-141.pdf</a></p>
<p>Low-latency, high-throughput systems for serving interactive queries are crucial to today's web services. Building such systems for today's web services is challenging due to the massive volumes of data they must cater to, and the requirement for supporting sophisticated queries (e.g., searches, filters, aggregations, regular expression matches, graph queries, etc.). Several recent approaches have highlighted the importance of in-memory storage for meeting the low-latency and high-throughput requirements, but these approaches are unable to sustain this performance when the data grows larger than DRAM capacity. Existing systems thus achieve these goals either by assuming large enough DRAM (too expensive) or by supporting only a limited set of queries (e.g., key-value stores).
<p>In this dissertation, we explore algorithmic and data structure-driven solutions to these system design problems. We present Succinct, a distributed data store that addresses these challenges using a fundamentally new approach --- executing a wide range of queries (e.g., search, random access, range, wildcard) <em>directly</em> on a compressed representation of the input data --- thereby enabling efficient execution of queries on data sizes much larger than DRAM capacity. We then describe BlowFish, a system that builds on Succinct to enable a dynamic storage-performance tradeoff in data stores, providing applications the flexibility to modify the storage and performance fractionally, just enough to meet the desired goals. Finally, we explore approaches that enable even richer query semantics on compressed data, including graph queries using ZipG, a memory efficient graph store, and regular expression queries using Sprint, a query rewriting technique.</p></p>
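Succinct builds on compressed suffix-array-style indexing. As a rough illustration of the kind of index involved (an uncompressed toy sketch, not Succinct's actual data structure), a suffix array supports substring search over a text via binary search over sorted suffixes:

```python
def build_suffix_array(text):
    """Indices of all suffixes of text, sorted lexicographically."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def search(text, sa, pattern):
    """All occurrence positions of pattern, via two binary searches
    over the suffix order (leftmost and rightmost matching suffix)."""
    lo, hi = 0, len(sa)
    while lo < hi:  # leftmost suffix whose prefix is >= pattern
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + len(pattern)] < pattern:
            lo = mid + 1
        else:
            hi = mid
    start = lo
    hi = len(sa)
    while lo < hi:  # one past the last suffix with this prefix
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + len(pattern)] <= pattern:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[start:lo])

text = "banana"
sa = build_suffix_array(text)
hits = search(text, sa, "ana")  # occurrences of "ana" in "banana"
```

A real deployment compresses the index so that queries run directly on the compressed form; the sketch above only shows the query structure being compressed.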
<p><strong>Advisor:</strong> Ion Stoica</p>
<p>Haptic Perception of Liquids Enclosed in Containers</p>
<p>
Carolyn Chen</p>
<p>
EECS Department<br>
University of California, Berkeley<br>
Technical Report No. UCB/EECS-2019-140<br>
October 25, 2019</p>
<p>
<a href="http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-140.pdf">http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-140.pdf</a></p>
<p>Service robots will require several important manipulation skills, including the ability to accurately measure and pour liquids. Prior work on robotic liquid pouring has primarily focused on visual techniques for sensing liquids, but these techniques fall short when liquids are obscured by opaque or closed containers. This paper proposes a complementary method for liquid perception via haptic sensing. The robot moves a container through a series of tilting motions and observes the wrenches induced at the manipulator’s wrist by the liquid’s shifting center of mass. That data is then analyzed with a physics-based model to estimate the liquid’s mass and volume. In experiments, this method achieves error margins of less than 1g and 2mL for an unknown liquid in a 600mL cylindrical container. The model can also predict the viscosity of fluids, which can be used for classifying water, oil, and honey with an accuracy of 98%. The estimated volume is used to precisely pour 100mL of water with less than 4% average error. This work will be presented and published at the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in Macau.</p>
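The quasi-static core of the mass estimate can be sketched as follows. This is a simplified illustration, not the paper's full tilting-motion wrench model: it assumes the container is held still, so the extra vertical force is just the liquid's weight. The force values and the water density are hypothetical.

```python
G = 9.81  # gravitational acceleration, m/s^2

def estimate_liquid_mass(f_z_total, f_z_empty, g=G):
    """Liquid mass (kg) from the vertical wrench component.

    f_z_total: vertical force (N) with the filled container held still.
    f_z_empty: vertical force (N) with the empty container.
    Quasi-static assumption: the difference is the liquid's weight.
    """
    return (f_z_total - f_z_empty) / g

def estimate_volume_ml(mass_kg, density_kg_per_l=1.0):
    """Volume (mL) from mass, given an assumed density (water ~ 1 kg/L)."""
    return mass_kg / density_kg_per_l * 1000.0

# Hypothetical readings: ~0.3 kg of water in a container.
mass = estimate_liquid_mass(f_z_total=5.89, f_z_empty=2.95)
volume = estimate_volume_ml(mass)
```

The paper's method goes further, using the torques induced by the shifting center of mass during tilting, which is what makes volume and viscosity recoverable for arbitrary containers.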
<p><strong>Advisor:</strong> Ruzena Bajcsy</p>
<p>Democratizing Web Automation: Programming for Social Scientists and Other Domain Experts</p>
<p>
Sarah Chasins</p>
<p>
EECS Department<br>
University of California, Berkeley<br>
Technical Report No. UCB/EECS-2019-139<br>
October 22, 2019</p>
<p>
<a href="http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-139.pdf">http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-139.pdf</a></p>
<p><strong>Advisor:</strong> Rastislav Bodik and Björn Hartmann</p>
<p>Deep learning for single-shot autofocus microscopy</p>
<p>
Henry Pinkard, Zachary Phillips, Arman Babakhani, Daniel Fletcher and Laura Waller</p>
<p>
EECS Department<br>
University of California, Berkeley<br>
Technical Report No. UCB/EECS-2019-138<br>
October 16, 2019</p>
<p>
<a href="http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-138.pdf">http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-138.pdf</a></p>
<p>Maintaining an in-focus image over long time scales is an essential and non-trivial task for a variety of microscopy applications. Here, we describe a fast and robust auto-focusing method that is compatible with a wide range of existing microscopes. It requires only the addition of one or a few off-axis illumination sources (e.g. LEDs), and can predict the focus correction from a single image with this illumination. We designed a neural network architecture, the fully connected Fourier neural network (FCFNN), that exploits an understanding of the physics of the illumination in order to make accurate predictions with 2-3 orders of magnitude fewer learned parameters and less memory usage than existing state-of-the-art architectures, allowing it to be trained without any specialized hardware. We provide an open-source implementation of our method, in order to enable fast and inexpensive autofocus compatible with a variety of microscopes.</p>
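The FCFNN's input pipeline can be sketched roughly as follows: low-frequency Fourier magnitudes of the image feed a small fully connected regressor, which is why the parameter count stays small. This is a toy illustration with made-up layer sizes and untrained random weights, not the authors' trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(img, n_coeffs=100):
    """Magnitudes of low-frequency Fourier coefficients: a physics-motivated
    input, since defocus mainly reshapes the low spatial frequencies."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    patch = spectrum[cy - 5:cy + 5, cx - 5:cx + 5]  # central 10x10 block
    return np.abs(patch).ravel()[:n_coeffs]

# Hypothetical tiny fully connected regressor: features -> scalar defocus.
W1 = rng.normal(size=(100, 32)) * 0.01
b1 = np.zeros(32)
W2 = rng.normal(size=32) * 0.01

def predict_defocus(img):
    h = np.maximum(fourier_features(img) @ W1 + b1, 0.0)  # ReLU hidden layer
    return float(h @ W2)

img = rng.random((64, 64))
z = predict_defocus(img)  # predicted defocus (arbitrary units, untrained)
```

The point of the sketch is the parameter count: a feature vector of ~100 Fourier magnitudes into a 32-unit hidden layer is orders of magnitude smaller than a convolutional net over the raw image.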
<p><strong>Advisor:</strong> Laura Waller</p>
<p>Closed Loop Digital LDO Linear Controller</p>
<p>
Zinia Tuli</p>
<p>
EECS Department<br>
University of California, Berkeley<br>
Technical Report No. UCB/EECS-2019-137<br>
October 2, 2019</p>
<p>
<a href="http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-137.pdf">http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-137.pdf</a></p>
<p>Faster Algorithms and Graph Structure via Gaussian Elimination</p>
<p>
Aaron Schild</p>
<p>
EECS Department<br>
University of California, Berkeley<br>
Technical Report No. UCB/EECS-2019-136<br>
September 19, 2019</p>
<p>
<a href="http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-136.pdf">http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-136.pdf</a></p>
<p>Graph partitioning has played an important role in theoretical computer science, particularly in the design of approximation algorithms and metric embeddings. In some of these applications, fundamental tradeoffs in graph partitioning prevented further progress. To overcome these barriers, we consider partitions of certain derived graphs of an undirected graph G obtained by applying Gaussian elimination to the Laplacian matrix of G to eliminate vertices from G. We use this technique and others to obtain new results on the following fronts:</p>
<p>Cheeger's Inequality: Cheeger's inequality shows that any undirected graph G with minimum nonzero normalized Laplacian eigenvalue λ<sub>G</sub> has a cut with conductance at most O(√λ<sub>G</sub>). Qualitatively, Cheeger's inequality says that if the relaxation time of a graph is high, there is a cut that certifies this. However, there is a gap in this relationship, as cuts can have conductance as low as Θ(λ<sub>G</sub>). To better approximate the relaxation time of a graph, we consider a more general object. Specifically, instead of bounding the mixing time with cuts, we bound it with cuts in graphs obtained via Gaussian elimination from G. Combinatorially, random walks in these graphs are equivalent in distribution to random walks in G restricted to a subset of its vertices. As a result, all Schur complement cuts have conductance at least Ω(λ<sub>G</sub>). We show that unlike with cuts, this inequality is tight up to a constant factor. Specifically, there is a derived graph containing a cut with conductance at most O(λ<sub>G</sub>).</p>
<p>Oblivious Routing: We show that in any graph, the average length of a flow path in an electrical flow between the endpoints of a random edge is O(log<sup>2</sup> n). This is a consequence of a more general result which shows that the spectral norm of the entrywise absolute value of the transfer impedance matrix of a graph is O(log<sup>2</sup> n). This result implies a simple oblivious routing scheme based on electrical flows in the case of transitive graphs.</p>
<p>Random Spanning Tree Sampling: We give an m<sup>1+o(1)</sup>β<sup>o(1)</sup>-time algorithm for generating uniformly random spanning trees in weighted graphs with max-to-min weight ratio β. In the process, we illustrate how fundamental tradeoffs in graph partitioning can be overcome by eliminating vertices from a graph using Schur complements of the associated Laplacian matrix. Our starting point is the Aldous-Broder algorithm, which samples a random spanning tree using a random walk. As in prior work, we use fast Laplacian linear system solvers to shortcut the random walk from a vertex v to the boundary of a set of vertices assigned to v called a “shortcutter.” We depart from prior work by introducing a new way of employing Laplacian solvers to shortcut the walk. To bound the amount of shortcutting work, we show that most random walk steps occur far away from an unvisited vertex. We apply this observation by charging uses of a shortcutter S to random walk steps in the Schur complement obtained by eliminating all vertices in S that are not assigned to it.</p>
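The Aldous-Broder algorithm mentioned above, without any of the shortcutting machinery, can be sketched in a few lines: walk randomly until every vertex is visited, and keep the edge by which each vertex is first entered.

```python
import random

def aldous_broder(adj, start=0, seed=0):
    """Sample a uniformly random spanning tree of a connected graph.

    adj: dict mapping each vertex to a list of its neighbours.
    Returns the tree as a set of frozenset edges. Keeping each vertex's
    first-entry edge during the random walk yields a uniform spanning tree.
    """
    rng = random.Random(seed)
    visited = {start}
    tree = set()
    v = start
    while len(visited) < len(adj):
        u = rng.choice(adj[v])          # one random walk step
        if u not in visited:            # first entry into u:
            visited.add(u)
            tree.add(frozenset((v, u))) # keep the entry edge
        v = u
    return tree

# 4-cycle: every spanning tree has exactly 3 edges.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
tree = aldous_broder(cycle4)
```

The expected running time is the graph's cover time, which is what the dissertation's shortcutters are designed to beat.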
<p><strong>Advisor:</strong> Satish Rao</p>
<p>Advances in Machine Learning: Nearest Neighbour Search, Learning to Optimize and Generative Modelling</p>
<p>
Ke Li</p>
<p>
EECS Department<br>
University of California, Berkeley<br>
Technical Report No. UCB/EECS-2019-135<br>
September 5, 2019</p>
<p>
<a href="http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-135.pdf">http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-135.pdf</a></p>
<p>(This is a condensed version. See dissertation for the full version.)
<p>Machine learning is the embodiment of an unapologetically data-driven philosophy that has increasingly become one of the most important drivers of progress in AI and beyond. Existing machine learning methods, however, entail making trade-offs in terms of computational efficiency, modelling flexibility and/or formulation faithfulness. In this dissertation, we will cover three different ways in which limitations along each axis can be overcome, without compromising on other axes. </p>
<p><strong>Computational Efficiency</strong> </p>
<p>We start with limitations on computational efficiency. Many large-scale machine learning methods require performing nearest neighbour search under the hood. Unfortunately, all exact algorithms suffer from either the curse of ambient dimensionality or the curse of intrinsic dimensionality. In fact, despite 40+ years of research, no exact algorithm can run faster than naive exhaustive search when the intrinsic dimensionality is high, which is almost certainly the case in machine learning. </p>
<p>We introduce a new family of exact algorithms, known as Dynamic Continuous Indexing, which overcomes both the curse of ambient dimensionality and the curse of intrinsic dimensionality. The key insight is that existing methods require distances between each point and a query to be approximately preserved in the data structure, whereas a method that only approximately preserves the <em>ordering</em> of nearby points relative to distant points would suffice. In practice, our algorithm achieves a 14-116x speedup and a 21x reduction in memory consumption compared to locality-sensitive hashing (LSH). </p>
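The ordering-preservation insight can be illustrated with a toy retrieval scheme (a simplified sketch, not the actual DCI data structure): project points onto random directions, shortlist points whose projections land near the query's, and re-rank the shortlist by true distance. Only the relative order along each direction matters, not the projected distances themselves.

```python
import numpy as np

rng = np.random.default_rng(1)

def build_index(points, n_proj=10):
    """Project the dataset onto random unit directions; each direction
    induces an ordering of the points, which the query phase exploits."""
    dirs = rng.normal(size=(n_proj, points.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return dirs, points @ dirs.T          # (n_points, n_proj) coordinates

def query(points, dirs, proj, q, k=1, shortlist=20):
    """Vote for points whose projections fall near the query's projection
    in each direction, then re-rank the top candidates by true distance."""
    qp = dirs @ q
    votes = np.zeros(len(points))
    for j in range(dirs.shape[0]):
        order = np.argsort(np.abs(proj[:, j] - qp[j]))
        votes[order[:shortlist]] += 1
    cand = np.argsort(-votes)[:shortlist]
    ranked = cand[np.argsort(np.linalg.norm(points[cand] - q, axis=1))]
    return ranked[:k]

points = rng.normal(size=(500, 32))
q = points[42] + 0.01 * rng.normal(size=32)   # query very close to point 42
dirs, proj = build_index(points)
nearest = query(points, dirs, proj, q)
```

Because only orderings are consulted, the scheme's behaviour depends on the data's intrinsic rather than ambient dimensionality, which is the property DCI formalizes and exploits.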
<p><strong>Modelling Flexibility</strong> </p>
<p>Next we move on to probabilistic modelling, which is critical to one of the central objectives of machine learning: modelling the uncertainty that is inherent in prediction. There is often a tradeoff between modelling flexibility and computational efficiency: simple models can often be learned straightforwardly and efficiently but are not expressive; complex models are expressive, but in general cannot be learned both exactly and efficiently. </p>
<p>Implicit probabilistic models, like generative adversarial nets (GANs), aim to get around this tradeoff by using a highly expressive function, e.g., a neural net, in their sampling procedure. Unfortunately, GANs fall short of learning the underlying distribution because of mode collapse, i.e., they can effectively ignore some arbitrary subset of the training data. We argue this arises from the direction in which generated samples are matched to the real data: by inverting this direction, we devise a new method, known as Implicit Maximum Likelihood Estimation (IMLE), which fundamentally overcomes mode collapse. This can be shown to be equivalent to maximizing a lower bound on the log-likelihood. </p>
<p><strong>Formulation Faithfulness</strong> </p>
<p>Finally we introduce a novel formulation that can enable the automatic discovery of new iterative gradient-based optimization algorithms, which have become the workhorse of modern machine learning. This effectively allows us to apply machine learning to improve machine learning, which has been a dream of machine learning researchers since the early days of the field. The key challenge, however, is that it is unclear how to represent a complex object like an algorithm in a way that is amenable to machine learning. </p>
<p>We get around this issue by observing that an optimization algorithm can be uniquely characterized by its update formula: different iterative optimization algorithms differ only in their choice of update formula. Therefore, if we can learn the update formula, we can automatically discover new optimization algorithms. We approximate the update formula with a neural net; by learning the parameters of the neural net, we can efficiently search over optimization algorithms yet to be discovered.</p></p>
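The idea of a parameterized update formula can be sketched with two scalar parameters standing in for the neural net (a hypothetical simplification): different parameter settings recover different known algorithms, and searching over the parameters searches over algorithms.

```python
import numpy as np

def neural_update(grad, momentum, theta):
    """Parameterized update formula: with theta = (a, b), the step is
    -a * grad - b * momentum. Setting (a, 0) recovers plain gradient
    descent; other settings give momentum-like methods. In the
    dissertation the formula is a neural net; two scalars suffice here."""
    a, b = theta
    momentum = 0.9 * momentum + grad  # running momentum buffer
    return -a * grad - b * momentum, momentum

def optimize(grad_fn, x0, theta, steps=100):
    x, m = x0, np.zeros_like(x0)
    for _ in range(steps):
        step, m = neural_update(grad_fn(x), m, theta)
        x = x + step
    return x

# Minimize f(x) = ||x - 1||^2, whose gradient is 2 (x - 1).
x_final = optimize(lambda x: 2.0 * (x - 1.0), np.zeros(3), theta=(0.05, 0.02))
```

Meta-training would tune theta (or the neural net's weights) so that `optimize` performs well across a distribution of objectives; the sketch fixes theta by hand to show the representation.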
<p><strong>Advisor:</strong> Jitendra Malik</p>
<p>TripAware: Separate Related Works Document</p>
<p>
Jesse Zhang, Jack Sullivan, Vasudev Venkatesh P. B., Kyle Tse, Andy Yan, John Leyden, Kalyanaraman Shankari and Randy H. Katz</p>
<p>
EECS Department<br>
University of California, Berkeley<br>
Technical Report No. UCB/EECS-2019-134<br>
August 30, 2019</p>
<p>
<a href="http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-134.pdf">http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-134.pdf</a></p>
<p>This tech report, written to accompany our publication <em>TripAware: Emotional and Informational Approaches to Encourage Sustainable Transportation via Mobile Applications</em> in ACM BuildSys 2019, contains an analysis of related work not covered in the paper due to BuildSys's page limit.</p>
<p>Algorithmic Improvisation</p>
<p>
Daniel J. Fremont</p>
<p>
EECS Department<br>
University of California, Berkeley<br>
Technical Report No. UCB/EECS-2019-133<br>
August 27, 2019</p>
<p>
<a href="http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-133.pdf">http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-133.pdf</a></p>
<p>The increasing use of autonomy for safety-critical tasks, from operating power grids to driving cars, has led to an acute need for reliable and secure systems. The ideal approach to obtaining rigorous reliability guarantees is to automatically construct systems from formal specifications using
<em>correct-by-construction synthesis</em>. A new dimension in this area is the synthesis of
<em>randomized</em> systems, which, as we show in this thesis, enables a broad range of new applications in safe autonomy and other fields. This is because randomness can provide several crucial benefits to a system, including
<em>robustness</em>,
<em>variety</em>, and
<em>unpredictability</em>. For example, a robot following a random route can be harder for an adversary to intercept, making the system more secure; a synthetic data generator for a machine learning algorithm can use randomness to produce diverse training data, making the ML model more robust. The key question, then, is
<em>how can we automatically synthesize a system with random behavior but formal guarantees?</em> This thesis proposes a theory of
<em>algorithmic improvisation</em> enabling the correct-by-construction synthesis of randomized systems, and explores its applications to safe autonomy.
<p>The first part of the thesis studies the theory of algorithmic improvisation in depth. We begin by introducing the core computational problem of <em>control improvisation (CI)</em>, which requires constructing an <em>improviser</em>, a randomized algorithm generating sequences of symbols subject to hard, soft, and randomness constraints. We develop a general approach to building improvisers, instantiate it to obtain efficient synthesis algorithms in some cases, and prove hardness results for others. Next, we generalize CI to the <em>reactive control improvisation (RCI)</em> problem, which allows us to synthesize <em>open</em> systems that interact with an adversarial environment. We again give efficient algorithms for constructing <em>improvising strategies</em> in some useful cases, and hardness results in others. Finally, we investigate <em>language-based improvisation</em>, using a probabilistic programming language (PPL) to provide greater control over the distribution of the improviser. We design a <em>domain-specific PPL</em>, Scenic, for defining distributions over <em>scenes</em>, configurations of physical objects and agents. Scenic significantly decreases the effort required to specify the complex environments of systems like self-driving cars. </p>
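A toy instance of the control improvisation problem makes the constraint structure concrete (a hypothetical example, not one from the thesis): the uniform distribution over the words satisfying a hard constraint meets the randomness constraint exactly when no single word exceeds the probability bound rho.

```python
import itertools
import random

def improviser(length, hard, rho, seed=0):
    """Toy improviser: the uniform distribution over all length-n words
    over the alphabet {a, b} satisfying the hard constraint. The
    randomness constraint (every word has probability at most rho) is
    feasible iff 1/|I| <= rho, where I is the admissible set."""
    words = ["".join(w) for w in itertools.product("ab", repeat=length)
             if hard("".join(w))]
    if not words or 1.0 / len(words) > rho:
        raise ValueError("randomness constraint infeasible")
    rng = random.Random(seed)
    return lambda: rng.choice(words)

# Hard constraint (hypothetical safety property): never contains "bb".
sample = improviser(4, hard=lambda w: "bb" not in w, rho=0.2)
word = sample()
```

Real improvisers add a soft constraint that must hold with probability at least 1 - epsilon, and represent the hard and soft sets symbolically (e.g., as automata) rather than by enumeration, which is where the thesis's algorithms and hardness results live.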
<p>In the second part of the thesis, we demonstrate how algorithmic improvisation can help with the design, analysis, and testing of autonomous systems. First, we show how to synthesize <em>randomized planners for mobile robots</em>, for example a patrolling security robot which uses randomness to make its route less predictable while still guaranteeing safety. Next, we study using algorithmic improvisation to create <em>human models</em> with realistic stochasticity and tunable behavior, a vital prerequisite for the design of a system which interacts with people. Finally, we propose a methodology for using language-based improvisation to train, test, and debug cyber-physical systems like autonomous cars by <em>generating synthetic data</em> from customizable distributions. We apply our methodology to an industrial neural network, finding bugs in the system, eliminating them through retraining, and boosting the performance of the network beyond what could be achieved with prior techniques by using Scenic to design training sets in a more intelligent way. </p>
<p>In summary, algorithmic improvisation is a mathematical framework for synthesizing randomized systems satisfying formal specifications. It has already proved useful in a wide range of fields, including robotics, cyber-physical systems, computer music, and machine learning, and shows promise in a variety of further applications to the design of secure and dependable systems.</p></p>
<p><strong>Advisor:</strong> Sanjit Seshia</p>
<p>Learning to Generalize via Self-Supervised Prediction</p>
<p>
Deepak Pathak</p>
<p>
EECS Department<br>
University of California, Berkeley<br>
Technical Report No. UCB/EECS-2019-132<br>
August 26, 2019</p>
<p>
<a href="http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-132.pdf">http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-132.pdf</a></p>
<p>Generalization, i.e., the ability to adapt to novel scenarios, is the hallmark of human intelligence. While we have systems that excel at recognizing objects, cleaning floors, playing complex games and occasionally beating humans, they are incredibly specific in that they only perform the tasks they are trained for and are miserable at generalization. Could optimizing towards fixed external goals be hindering the generalization instead of aiding it? In this thesis, we present our initial efforts toward endowing artificial agents with a human-like ability to generalize in diverse scenarios. The main insight is to first allow the agent to learn general-purpose skills in a completely self-supervised manner, without optimizing for any external goal.
<p>To be able to learn on its own, the claim is that an artificial agent must be embodied in the world, develop an understanding of its sensory input (e.g., image stream) and simultaneously learn to map this understanding to its motor outputs (e.g., torques) in an unsupervised manner. All these considerations lead to two fundamental questions: how to learn rich representations of the world similar to what humans learn?; and how to re-use such a representation of past knowledge to incrementally adapt and learn more about the world similar to how humans do? We believe prediction is the key to this answer. We propose generic mechanisms that employ prediction as a supervisory signal in allowing the agents to learn sensory representations as well as motor control. These two abilities equip an embodied agent with a basic set of general-purpose skills which are then later repurposed to perform complex tasks. </p>
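A minimal sketch of prediction error as a supervisory signal: a forward model predicts the next state from the current state and action, and its error becomes an intrinsic reward, so the agent is drawn to what it cannot yet predict. A linear model with random weights stands in for the learned network here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward model: predicts next-state features from the
# concatenated (state, action) vector. Linear for illustration; in
# practice this is a neural net trained alongside the policy.
F = rng.normal(size=(6, 4)) * 0.1

def intrinsic_reward(s, a, s_next):
    """Curiosity signal: the forward model's prediction error. Transitions
    the agent cannot yet predict yield high reward, driving exploration
    without any external reward."""
    pred = np.concatenate([s, a]) @ F
    return float(((pred - s_next) ** 2).mean())

s, a = rng.normal(size=4), rng.normal(size=2)
s_next = rng.normal(size=4)
r = intrinsic_reward(s, a, s_next)
```

As the forward model improves on familiar transitions, their reward decays toward zero, so the agent keeps moving toward novel parts of the environment.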
<p>We discuss how this framework can be instantiated to develop curiosity-driven agents (virtual as well as real) that can learn to play games, learn to walk, and learn to perform real-world object manipulation without any rewards or supervision. These self-supervised robotic agents, after exploring the environment, can generalize to find their way in office environments, tie knots using rope, rearrange object configuration, and compose their skills in a modular fashion.</p></p>
<p><strong>Advisor:</strong> Trevor Darrell and Alexei (Alyosha) Efros</p>
<p>Complex-valued Deep Learning with Applications to Magnetic Resonance Image Synthesis</p>
<p>
Pat Virtue</p>
<p>
EECS Department<br>
University of California, Berkeley<br>
Technical Report No. UCB/EECS-2019-130<br>
August 19, 2019</p>
<p>
<a href="http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-130.pdf">http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-130.pdf</a></p>
<p>Magnetic resonance imaging (MRI) has the ability to produce a series of images that each have different visual contrast between tissues, allowing clinicians to qualitatively assess pathologies that may be visible in one contrast-weighted image but not others. Unfortunately, these standard contrast-weighted images do not contain quantitative values, producing challenges for post-processing, assessment, and longitudinal studies. MR fingerprinting is a recent technique that produces quantitative tissue maps from a single pseudorandom acquisition, but it relies on computationally heavy nearest neighbor algorithms to solve the associated nonlinear inverse problem. In this dissertation, we present our deep learning methods to speed up quantitative MR fingerprinting and synthesize the standard contrast-weighted images directly from the same MR fingerprinting scan.
<p>Adapting deep learning methodologies to MR image synthesis presents two specific challenges: 1) complex-valued data and 2) the presence of noise while undersampling. </p>
<p>MRI signals are inherently complex-valued, as they are measurements of rotating magnetization within the body. However, modern neural networks are not designed to support complex values. As an example, the pervasive ReLU activation function is undefined for complex numbers. This limitation curtails the impact of deep learning for complex data applications, such as MRI, radio frequency modulation identification, and target recognition in synthetic-aperture radar images. In this dissertation, we discuss the motivation for complex-valued networks, the changes that we have made to implement complex backpropagation, and our new complex cardioid activation function that made it possible to outperform real-valued networks for MR fingerprinting image synthesis. </p>
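The complex cardioid can be written in a couple of lines. The definition below, f(z) = ½(1 + cos ∠z)·z, follows our reading of the authors' formulation and should be checked against the dissertation: it attenuates each input by a phase-dependent factor in [0, 1] while leaving the phase unchanged.

```python
import numpy as np

def cardioid(z):
    """Complex cardioid activation: scales z by 0.5 * (1 + cos(angle(z))),
    a factor in [0, 1] depending only on the phase. Restricted to the real
    line it reduces to ReLU: positive reals pass through unchanged and
    negative reals map to zero."""
    return 0.5 * (1.0 + np.cos(np.angle(z))) * z

# Positive real input passes through; negative real input is zeroed;
# a purely imaginary input is halved.
outputs = [cardioid(3.0 + 0j), cardioid(-3.0 + 0j), cardioid(2j)]
```

Unlike applying ReLU separately to the real and imaginary parts, this keeps the phase of every activation intact, which matters when the phase itself carries physical information, as it does in MRI.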
<p>In Fourier-based medical imaging, undersampling results in an underdetermined system, in which a linear reconstruction will exhibit artifacts. Another consequence is lower signal-to-noise ratio (SNR) because of fewer acquired measurements. The coupled effects of low SNR and underdetermined system during reconstruction makes it difficult to model the signal and analyze image reconstruction algorithms. We demonstrate that neural networks trained only with a Gaussian noise model fail to process in vivo MR fingerprinting data, while our proposed empirical noise model allows neural networks to successfully synthesize quantitative images. Additionally, to better understand the impact of noise on undersampled imaging systems, we present an image quality prediction process that reconstructs fully sampled, fully determined data with noise added to simulate the SNR loss induced by a given undersampling pattern. The resulting prediction image empirically shows the effects of noise in undersampled image reconstruction without any effect from an underdetermined system, allowing MR pulse sequence and reconstruction developers to determine if low SNR, rather than the underdetermined system, is the limiting factor for a successful reconstruction.</p></p>
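The image quality prediction step can be sketched as follows. This is a toy example: the √R noise amplification used here is an assumed stand-in for the pattern-dependent SNR loss, and the paper's procedure calibrates that loss to the given undersampling pattern.

```python
import numpy as np

def quality_prediction(kspace_full, R, sigma, seed=0):
    """Reconstruct fully sampled k-space with extra noise emulating the SNR
    loss of R-fold undersampling. Adding noise of std sigma * sqrt(R - 1)
    on top of measurement noise of std sigma lowers SNR by sqrt(R)
    (a simplifying assumption; the right factor depends on the pattern).
    Because all of k-space is kept, the system stays fully determined,
    isolating the effect of noise from that of undersampling artifacts."""
    noise_rng = np.random.default_rng(seed)
    noise = sigma * np.sqrt(R - 1) * (
        noise_rng.normal(size=kspace_full.shape)
        + 1j * noise_rng.normal(size=kspace_full.shape))
    return np.abs(np.fft.ifft2(kspace_full + noise))

# Hypothetical phantom: a bright square on a dark background.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
kspace = np.fft.fft2(img)
prediction = quality_prediction(kspace, R=4, sigma=0.5)
```

Comparing this prediction image against the actual undersampled reconstruction shows whether low SNR or the underdetermined system is the limiting factor.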
<p><strong>Advisor:</strong> Michael Lustig and Stella Yu</p>
<p>Effect of Model Dissimilarity on Learning to Communicate in a Wireless Setting with Limited Information</p>
<p>
Caryn Tran, Vignesh Subramanian, Kailas Vodrahalli and Anant Sahai</p>
<p>
EECS Department<br>
University of California, Berkeley<br>
Technical Report No. UCB/EECS-2019-129<br>
August 16, 2019</p>
<p>
<a href="http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-129.pdf">http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-129.pdf</a></p>
<p>This work addresses the problem of collaborative learning in the context of wireless communication schemes. Two agents must learn, via reinforcement learning, modulation and demodulation schemes that enable them to communicate with each other in the presence of an AWGN channel. We propose and examine the echo private preamble protocol, a communication protocol that enables two agents to learn how to communicate with little shared context. Under this protocol, neural-network-based agents learn strategies to communicate with other neural agents, with agents that use a fixed standardized protocol, and with agents built on a different model. This work also builds iteratively on relaxations of this protocol to show that this information-restricted protocol is comparable to ones with a larger shared context. My specific contributions lie in introducing a new model (polynomials), writing the code base, and running and analyzing the baseline experiments for the echo private preamble protocol, as well as the experiments examining the effects of learning with mismatched agents whose internal models are dissimilar.</p>
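The round-trip feedback at the heart of an echo-style protocol can be sketched with fixed BPSK-like agents (a hypothetical simplification; the actual agents are learned): agent A transmits a preamble only it knows, agent B demodulates and echoes back its guess, and A can score the round trip against the bits it sent, with no ground truth ever exchanged over the air.

```python
import numpy as np

rng = np.random.default_rng(0)

def modulate(bits, gain):
    """Toy BPSK-style modulator with a tunable gain (a stand-in for the
    learnable modulators in the paper)."""
    return gain * (2.0 * bits - 1.0)

def demodulate(symbols):
    return (symbols > 0).astype(float)

def echo_round_trip(bits, gain, noise_std=0.1):
    """One echo exchange: A transmits its private preamble, B demodulates
    and re-modulates its guess, and A scores the round trip against the
    bits it originally sent. A simplified sketch of the feedback loop,
    not the paper's exact training setup."""
    tx = modulate(bits, gain)
    rx_b = tx + noise_std * rng.normal(size=bits.shape)    # A -> B, AWGN
    echo = modulate(demodulate(rx_b), gain)                # B echoes guess
    rx_a = echo + noise_std * rng.normal(size=bits.shape)  # B -> A, AWGN
    return float(np.mean(demodulate(rx_a) != bits))        # round-trip BER

bits = rng.integers(0, 2, size=64).astype(float)
error = echo_round_trip(bits, gain=1.0)
```

In the learned setting, this round-trip error is the only training signal each agent receives, which is what makes the shared context so small.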
<p><strong>Advisor:</strong> Anant Sahai</p>
<p>Untethered Microrobots of the Rolling, Jumping & Flying kinds</p>
<p>
Palak Bhushan and Claire Tomlin</p>
<p>
EECS Department<br>
University of California, Berkeley<br>
Technical Report No. UCB/EECS-2019-128<br>
August 16, 2019</p>
<p>
<a href="http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-128.pdf">http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-128.pdf</a></p>
<p><strong>Advisor:</strong> Claire Tomlin</p>
<p>Complex-valued Deep Learning with Applications to Magnetic Resonance Image Synthesis</p>
<p>
Patrick Virtue</p>
<p>
EECS Department<br>
University of California, Berkeley<br>
Technical Report No. UCB/EECS-2019-126<br>
August 16, 2019</p>
<p>
<a href="http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-126.pdf">http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-126.pdf</a></p>
<p>Magnetic resonance imaging (MRI) has the ability to produce a series of images that each have different visual contrast between tissues, allowing clinicians to qualitatively assess pathologies that may be visible in one contrast-weighted image but not others. Unfortunately, these standard contrast-weighted images do not contain quantitative values, producing challenges for post-processing, assessment, and longitudinal studies. MR fingerprinting is a recent technique that produces quantitative tissue maps from a single pseudorandom acquisition, but it relies on computationally heavy nearest neighbor algorithms to solve the associated nonlinear inverse problem. In this dissertation, we present our deep learning methods to speed up quantitative MR fingerprinting and synthesize the standard contrast-weighted images directly from the same MR fingerprinting scan.
<p>Adapting deep learning methodologies to MR image synthesis presents two specific challenges: 1) complex-valued data and 2) the presence of noise in undersampled acquisitions. </p>
<p>MRI signals are inherently complex-valued, as they are measurements of rotating magnetization within the body. However, modern neural networks are not designed to support complex values. As an example, the pervasive ReLU activation function is undefined for complex numbers. This limitation curtails the impact of deep learning for complex data applications, such as MRI, radio frequency modulation identification, and target recognition in synthetic-aperture radar images. In this dissertation, we discuss the motivation for complex-valued networks, the changes that we have made to implement complex backpropagation, and our new complex cardioid activation function that made it possible to outperform real-valued networks for MR fingerprinting image synthesis. </p>
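A minimal NumPy sketch of a phase-sensitive activation of this kind: the cardioid scales each input by a factor between 0 and 1 determined by its phase, and reduces to ReLU when restricted to the real line.

```python
import numpy as np

def cardioid(z):
    """Complex cardioid activation: attenuate z by a phase-dependent
    factor in [0, 1]. Real positive inputs pass through unchanged
    (factor 1), real negative inputs are zeroed (factor 0), so on the
    real line this is exactly ReLU; complex inputs are attenuated
    smoothly according to their phase."""
    return 0.5 * (1.0 + np.cos(np.angle(z))) * z

x = np.array([2.0, -3.0])        # real inputs: behaves like ReLU
z = np.array([1 + 1j, -1 + 1j])  # complex inputs: smooth phase-based attenuation
print(cardioid(x))
print(cardioid(z))
```

Unlike applying ReLU separately to real and imaginary parts, this keeps the output's phase equal to the input's, which matters when the phase carries physical signal.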
<p>In Fourier-based medical imaging, undersampling results in an underdetermined system, in which a linear reconstruction will exhibit artifacts. Another consequence is lower signal-to-noise ratio (SNR) because of fewer acquired measurements. The coupled effects of low SNR and an underdetermined system during reconstruction make it difficult to model the signal and analyze image reconstruction algorithms. We demonstrate that neural networks trained only with a Gaussian noise model fail to process in vivo MR fingerprinting data, while our proposed empirical noise model allows neural networks to successfully synthesize quantitative images. Additionally, to better understand the impact of noise on undersampled imaging systems, we present an image quality prediction process that reconstructs fully sampled, fully determined data with noise added to simulate the SNR loss induced by a given undersampling pattern. The resulting prediction image empirically shows the effects of noise in undersampled image reconstruction without any effect from an underdetermined system, allowing MR pulse sequence and reconstruction developers to determine if low SNR, rather than the underdetermined system, is the limiting factor for a successful reconstruction.</p></p>
<p>Stochastic Local Search and the Lovász Local Lemma</p>
<p>
Fotios Iliopoulos</p>
<p>
EECS Department<br>
University of California, Berkeley<br>
Technical Report No. UCB/EECS-2019-125<br>
August 16, 2019</p>
<p>
<a href="http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-125.pdf">http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-125.pdf</a></p>
<p>This thesis studies randomized local search algorithms, inspired by and extending the Lovász Local Lemma (LLL), for finding solutions of constraint satisfaction problems.
<p>The LLL is a powerful probabilistic tool for establishing the existence of objects satisfying certain properties (constraints). As a probability statement it asserts that, given a family of “bad” events, if each bad event is individually not very likely and independent of all but a small number of other bad events, then the probability of avoiding all bad events is strictly positive. In a celebrated breakthrough, Moser and Tardos made the LLL constructive for any product probability measure over explicitly presented variables. Specifically, they proved that whenever the LLL condition holds, their Resample algorithm, which repeatedly selects any occurring bad event and resamples all its variables according to the measure, quickly converges to an object with desired properties. </p>
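The Resample algorithm is short enough to state in code. Below is a hypothetical sketch for k-SAT (the clause representation and function names are my own, not from the dissertation): a clause is a "bad event" when all of its literals evaluate to false, and the algorithm repeatedly picks an occurring bad event and resamples its variables from the uniform product measure.

```python
import random

def moser_tardos_ksat(n_vars, clauses, rng=random):
    """Moser-Tardos Resample for k-SAT. Each clause is a list of
    (variable_index, is_positive) literals; a clause is violated
    (its bad event occurs) when every literal evaluates to False.
    Under the LLL condition, the expected number of resampling
    steps is small (linear in the number of clauses)."""
    # Initial sample from the uniform product measure over assignments.
    assignment = [rng.random() < 0.5 for _ in range(n_vars)]

    def violated(clause):
        return all(assignment[v] != positive for v, positive in clause)

    steps = 0
    while True:
        bad = next((c for c in clauses if violated(c)), None)
        if bad is None:
            return assignment, steps
        # Resample every variable of the occurring bad event,
        # independently, from the uniform product measure.
        for v, _ in bad:
            assignment[v] = rng.random() < 0.5
        steps += 1

# (x0 or x1) and (not x0 or x2) and (not x1 or not x2)
clauses = [[(0, True), (1, True)],
           [(0, False), (2, True)],
           [(1, False), (2, False)]]
solution, steps = moser_tardos_ksat(3, clauses, random.Random(0))
print(solution, steps)
```

Note that the resampling step is oblivious: it ignores the current configuration entirely, which is precisely the restriction the framework in this dissertation lifts.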
<p>In this dissertation we present a framework that extends the work of Moser and Tardos and can be used to analyze arbitrary, possibly complex, focused local search algorithms, i.e., search algorithms whose process for addressing violated constraints, while local, is more sophisticated than obliviously resampling their variables independently of the current configuration. We give several applications of this framework, notably a new vertex coloring algorithm for graphs with sparse vertex neighborhoods that uses a number of colors that matches the algorithmic barrier for random graphs, and polynomial time algorithms for the celebrated (non-constructive) results of Kahn for the Goldberg-Seymour and List-Edge-Coloring Conjectures. </p>
<p>Finally, we introduce a generalization of Kolmogorov’s notion of commutative algorithms, cast as matrix commutativity, and show that their output distribution approximates the so-called “LLL-distribution”, i.e., the distribution obtained by conditioning on avoiding all bad events. This fact allows us to consider questions such as the number of possible distinct final states and the probability that certain portions of the state space are visited by a local search algorithm, extending existing results for the Moser-Tardos algorithm to commutative algorithms.</p></p>
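Writing out the LLL-distribution mentioned above, with bad events \(B_1, \dots, B_m\) over the underlying probability measure, it is simply the conditional distribution

```latex
\Pr_{\mathrm{LLL}}[E] \;=\; \Pr\Bigl[\, E \;\Bigm|\; \textstyle\bigcap_{i=1}^{m} \overline{B_i} \,\Bigr]
```

for any event \(E\), i.e., the measure conditioned on avoiding every bad event.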
<p><strong>Advisor:</strong> Alistair Sinclair</p>