ESM1b
Encode protein sequences with the 33-layer ESM-1b Transformer to obtain rich sequence embeddings and masked-token predictions. The service supports GPU-accelerated batched inference for up to 8 sequences of length ≤1022, returning mean, per-residue, and BOS (sequence-start token) representations, attention maps, and masked-LM logits from configurable layers. These embeddings capture evolutionary and structural signals useful for downstream models such as mutational effect prediction, secondary structure or contact prediction, and remote homology search.
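As a minimal sketch of how the returned representation types relate to each other, assuming the fair-esm convention that the model prepends a BOS token and appends an EOS token to each sequence (the array contents below are random placeholders, not real model output):

```python
import numpy as np

# Hypothetical per-token output for one sequence of length 5 from a chosen
# layer. ESM-1b's embedding dimension is 1280; the token axis has length
# seq_len + 2 because of the prepended BOS and appended EOS tokens.
seq_len, dim = 5, 1280
rng = np.random.default_rng(0)
token_reps = rng.standard_normal((seq_len + 2, dim))

bos_rep = token_reps[0]                    # BOS representation (one vector)
per_residue = token_reps[1 : seq_len + 1]  # one vector per amino acid
mean_rep = per_residue.mean(axis=0)        # sequence-level mean embedding

assert bos_rep.shape == (dim,)
assert per_residue.shape == (seq_len, dim)
assert mean_rep.shape == (dim,)
```

The mean representation averages only over residue positions, excluding the BOS and EOS tokens, which is the usual way a fixed-size sequence embedding is derived from per-residue outputs.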
