Biochemistry Seminar: Tristan Bepler, "Learning to simultaneously locate and classify particles in cryo-electron micrographs without supervision"

Dates
Wed, Dec 07, 2022, 12:00 PM – 1:00 PM
Admission Fee
Free. Coffee & tea will be available in the ASRC Cafe at 11:30 AM.
Event Address
The speaker will present in person:
ASRC Main Auditorium
85 Saint Nicholas Terrace
A current CUNY Cleared4 pass is required for entry; masks are optional.
Phone Number
212-650-8803
Event Location
The seminar can also be viewed via Zoom.
Zoom link: https://gc-cuny.zoom.us/j/4954048198?pwd=eVlkMFdHcjV6d3pkYzB4V2VtbHJGdz09
Event Details

Tristan Bepler, Group Leader, Simons Machine Learning Center, New York Structural Biology Center, will give a talk titled "Learning to simultaneously locate and classify particles in cryo-electron micrographs without supervision."

ABSTRACT

In many imaging modalities, objects of interest can occur in a variety of locations and poses (i.e., they are subject to translations and rotations in 2D or 3D), but the location and pose of an object do not change its semantics (i.e., the object's essence). That is, the specific location and rotation of an airplane in satellite imagery, the 3D rotation of a chair in a natural image, or the rotation of a particle in a cryo-electron micrograph do not change the intrinsic nature of those objects. Here, we consider the problem of learning semantic representations of objects that are invariant to pose and location in a fully unsupervised manner. We address shortcomings in previous approaches to this problem by introducing TARGET-VAE, a translation- and rotation-group-equivariant variational autoencoder framework. TARGET-VAE combines three core innovations: 1) a rotation- and translation-group-equivariant encoder architecture, 2) a structurally disentangled distribution over latent rotation, translation, and a rotation- and translation-invariant semantic object representation, which are jointly inferred by the approximate inference network, and 3) a spatially equivariant generator network. In comprehensive experiments, we show that TARGET-VAE learns disentangled representations without supervision that significantly improve upon, and avoid the pathologies of, previous methods. When trained on images highly corrupted by rotation and translation, the semantic representations learned by TARGET-VAE are similar to those learned on consistently posed objects, dramatically improving clustering in the semantic latent space. Furthermore, TARGET-VAE performs remarkably accurate unsupervised pose and location inference. We expect methods like TARGET-VAE to underpin future approaches for unsupervised object generation, pose prediction, and object detection.
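
To make the idea of a structurally disentangled latent concrete, below is a minimal, illustrative PyTorch sketch of an approximate posterior that factors into a rotation angle, a 2D translation, and a pose-invariant semantic code. This is not the authors' implementation: the encoder here is an ordinary convolutional network rather than the group-equivariant architecture the talk describes, and all module names, layer sizes, and the z_dim parameter are hypothetical choices made for the example.

import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    """Map an image to three separate latent factors:
    q(theta | x) for rotation, q(t | x) for translation,
    and q(z | x) for a pose-invariant semantic code.
    (Illustrative only; not an equivariant architecture.)"""
    def __init__(self, z_dim=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Separate heads, each producing a Gaussian mean and log-variance.
        self.rot_head = nn.Linear(64, 2)          # 1D rotation angle
        self.trans_head = nn.Linear(64, 4)        # 2D translation
        self.sem_head = nn.Linear(64, 2 * z_dim)  # semantic code

    def forward(self, x):
        h = self.features(x)
        theta_mu, theta_logvar = self.rot_head(h).chunk(2, dim=-1)
        t_mu, t_logvar = self.trans_head(h).chunk(2, dim=-1)
        z_mu, z_logvar = self.sem_head(h).chunk(2, dim=-1)
        return (theta_mu, theta_logvar), (t_mu, t_logvar), (z_mu, z_logvar)

def reparameterize(mu, logvar):
    # Standard VAE reparameterization: sample = mu + sigma * eps.
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

# Usage sketch: encode a batch and sample the semantic code.
enc = DisentangledEncoder()
x = torch.randn(8, 1, 64, 64)   # batch of single-channel images
q_theta, q_t, q_z = enc(x)
z = reparameterize(*q_z)        # intended to be pose-invariant

In the full framework described in the abstract, a spatially equivariant generator would render an image from z at the sampled rotation and translation, so that changing an object's pose in the input moves probability mass in q(theta) and q(t) while leaving the semantic code z unchanged.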
