Learning and disentangling coherent latent representations in variational autoencoders (VAEs) has recently attracted widespread attention. However, the latent space of the VAE is constrained by the prior distribution, which can hinder the latent variables from accurately capturing semantic information...
Variational autoencoders (VAEs) play an important role in high-dimensional data generation owing to their ability to fuse stochastic data representations with the power of recent deep learning techniques. The main advantages of these types of generators lie in their ability to encode the informa...
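The stochastic representation these snippets refer to rests on two ingredients shared by every Gaussian VAE: the reparameterization trick and the closed-form KL term of the ELBO. A minimal stdlib-only sketch (function names are illustrative, not from any of the cited papers):

```python
import math
import random

def reparameterize(mu, logvar):
    # z = mu + sigma * eps with eps ~ N(0, 1); the noise is external to the
    # network parameters, so gradients can flow through mu and logvar.
    eps = random.gauss(0.0, 1.0)
    return mu + math.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    # Closed-form KL( N(mu, sigma^2) || N(0, 1) ) for one latent dimension;
    # summed over dimensions, this is the regularization term of the ELBO.
    return 0.5 * (math.exp(logvar) + mu * mu - 1.0 - logvar)
```

The KL term is zero exactly when the encoder posterior matches the standard-normal prior (mu = 0, logvar = 0), which is the pressure toward a coherent latent space that several of these abstracts discuss.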
This paper proposes Dirichlet Variational Autoencoder (DirVAE) using a Dirichlet prior. To infer the parameters of DirVAE, we utilize the stochastic gradient method by approximating the inverse cumulative distribution function of the Gamma distribution, which is a component of the Dirichlet distribution...
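The inverse-CDF trick mentioned in the DirVAE snippet can be sketched with the standard small-alpha approximation of the Gamma quantile function; the exact approximation used in the paper may differ, and the shape values here are illustrative:

```python
import math
import random

def approx_gamma_icdf(u, alpha):
    # Near zero, the Gamma(alpha, 1) CDF behaves as F(z) ~ z**alpha / (alpha * Gamma(alpha)),
    # so its inverse is approximately (u * alpha * Gamma(alpha)) ** (1 / alpha).
    # This differentiable surrogate enables reparameterized (pathwise) gradients.
    return (u * alpha * math.gamma(alpha)) ** (1.0 / alpha)

def sample_dirichlet(alphas, rng=None):
    # A Dirichlet draw is a vector of independent Gamma(alpha_i, 1) draws, normalized.
    rng = rng or random.Random(0)
    gammas = [approx_gamma_icdf(rng.random(), a) for a in alphas]
    total = sum(gammas)
    return [g / total for g in gammas]
```

Because each Gamma component is produced by a deterministic transform of a uniform draw, the gradient of the loss can pass through the sampling step, which is what makes stochastic-gradient inference of the Dirichlet prior's parameters possible.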
We present a variational autoencoder (ProteinVAE) that can generate synthetic viral vector serotypes without epitopes for pre-existing neutralizing antibodies. A pre-trained protein language model was incorporated into the encoder to improve data efficiency, and deconvolution-based upsampling was used for decoding to ...
What are variational autoencoders used for? VAEs have three fundamental purposes: creating new data, identifying anomalies in data, and removing noisy or unwanted data. These three capabilities might not sound impressive, but they make VAEs well suited for numerous powerful applications, such as...
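The anomaly-detection use mentioned above typically reduces to thresholding reconstruction error: inputs a trained model cannot reconstruct well are flagged. A generic sketch, where `reconstruct` stands in for a hypothetical trained encode-decode pass (not any specific model from these snippets):

```python
def reconstruction_error(x, x_hat):
    # Mean squared error between an input vector and its reconstruction.
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def flag_anomalies(batch, reconstruct, threshold):
    # An input whose reconstruction error exceeds the threshold is assumed to
    # lie off the training manifold and is flagged as anomalous.
    return [reconstruction_error(x, reconstruct(x)) > threshold for x in batch]
```

The same reconstruction machinery covers the denoising purpose: passing a corrupted input through the model and keeping the reconstruction discards whatever the latent code could not represent.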
Lecture 4 Latent Variable Models -- Variational AutoEncoder (VAE) While the old way of doing statistics used to be mostly concerned with inferring what has happened, modern statistics is more concerned with predicting what will happen, and many practical machine learning applications rely on it. ...
Normalizing flows, autoregressive models, variational autoencoders (VAEs), and deep energy-based models are among competing likelihood-based frameworks for deep generative learning. Among them, VAEs have the advantage of fast and tractable sampling and easy-to-access encoding networks. However, they...
Therefore, in this paper, we propose a new deep learning method to learn the node embedding for bipartite networks based on the widely used autoencoder framework. Moreover, we carefully devise a node-level triplet including two types of nodes to assign the embedding by integrating the local ...
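The node-level triplet described in this snippet presumably feeds a margin-based embedding loss; the paper's exact formulation is not shown here, so the following is a generic triplet margin loss over embedding vectors (margin value illustrative):

```python
def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    # Hinge loss: pull the anchor embedding toward the positive example and
    # push it away from the negative one by at least `margin`.
    def sqdist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return max(0.0, sqdist(anchor, positive) - sqdist(anchor, negative) + margin)
```

For a bipartite network, the anchor and positive would typically be connected nodes (possibly of different types) and the negative an unconnected node, so minimizing this loss encodes local link structure into the embeddings.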
This study introduces FactorVQVAE, the first integration of the Vector Quantized Variational Autoencoder (VQVAE) into factor modeling, providing a novel framework for predicting cross-sectional stock returns and constructing systematic investment portfolios. The model employs a two-stage architecture to ...
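The core step any VQVAE-based model such as this one relies on is nearest-codebook quantization: the encoder's continuous output is snapped to the closest entry of a learned discrete codebook. A minimal sketch of that lookup (the codebook here is a toy stand-in, not FactorVQVAE's learned one):

```python
def quantize(z, codebook):
    # Replace the continuous encoder output z with its nearest codebook vector;
    # the returned index is the discrete code used downstream.
    def sqdist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    idx = min(range(len(codebook)), key=lambda k: sqdist(z, codebook[k]))
    return idx, codebook[idx]
```

In a two-stage architecture, the first stage learns the codebook and reconstructions, and the second stage models the sequence of discrete indices this lookup produces.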