What are pLMs?
What are reliability scores?
What are substitution matrices?
What are the differences between BLOSUM and PSSM?
What is the advantage of LSTMs over CNNs?
What is the advantage of embeddings over MSAs?
What are PSSMs and how are they computed?
What are three advantages of CNNs over FNNs?
How can sequencing mistakes challenge per-protein predictions?
Sequencing mistakes challenge per-protein predictions because a prediction tool can only be as good as the data on which it was trained. If those data contain sequencing mistakes, the tool will make more prediction mistakes due to the misleading training data.
What are transformers?
The transformer is a deep learning architecture that produces contextual embeddings of the amino acids in a protein sequence. Trained with the masked language model objective, it builds a context around each position and learns to 'attend' to (focus on) the amino acids and peptides that are relevant in the given context. Such protein language models have been found to encode contact maps, taxonomy, and biophysical characteristics in their distributed representations.
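The 'attending' described above can be sketched as scaled dot-product self-attention: each residue queries every other residue and mixes their representations by relevance. The sketch below is a minimal single-head toy, not a real pLM; the random embedding table and weight matrices stand in for trained parameters, and the example sequence is made up.

```python
import numpy as np

np.random.seed(0)

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
aa_index = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

d_model = 8
# Random stand-ins for trained parameters (embedding table, projections)
embed = np.random.randn(len(AMINO_ACIDS), d_model)
W_q = np.random.randn(d_model, d_model)
W_k = np.random.randn(d_model, d_model)
W_v = np.random.randn(d_model, d_model)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(seq):
    """One contextual embedding per residue via a single attention head."""
    X = embed[[aa_index[aa] for aa in seq]]      # (L, d) position-independent
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(d_model)          # (L, L) pairwise relevance
    A = softmax(scores, axis=-1)                 # attention weights, rows sum to 1
    return A @ V, A                              # context-mixed embeddings, weights

ctx, attn = self_attention("MKTAYIAK")           # hypothetical 8-residue peptide
print(ctx.shape)                                 # (8, 8): one vector per residue
print(attn.sum(axis=1))                          # each row of weights sums to 1
```

In a real pLM the parameters are learned by masking residues and predicting them from the attended context, which is what forces the embeddings to become context-dependent.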