Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
Pre-Training with Whole Word Masking for Chinese BERT (the Chinese BERT-wwm model series)
BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)
Chinese Language Understanding Evaluation Benchmark: datasets, baselines, pre-trained models, corpus, and leaderboard
A Lite BERT for Self-Supervised Learning of Language Representations; large-scale Chinese pre-trained ALBERT models
Open Source Pre-training Model Framework in PyTorch & Pre-trained Model Zoo
Rust native ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT2,...)
RoBERTa for Chinese: Chinese pre-trained RoBERTa models
news-please - an integrated web crawler and information extractor for news that just works
The implementation of DeBERTa
🏡 Fast & easy transfer learning for NLP. Harvesting language models for the industry. Focus on Question Answering.
A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT2, decoders, etc.) on CPU and GPU.
CLUENER2020: Chinese fine-grained named entity recognition
Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo
Official implementation of the papers "GECToR – Grammatical Error Correction: Tag, Not Rewrite" (BEA-20) and "Text Simplification by Tagging" (BEA-21)
🤖 A PyTorch library of curated Transformer models and their composable components
A collection of high-quality Chinese pre-trained models: state-of-the-art large models, the fastest small models, and dedicated similarity models
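The first item in this list describes loralib, the reference implementation of LoRA. As a rough illustration (not taken verbatim from that repository's README), the sketch below shows the typical loralib workflow: swap in LoRA-augmented layers, freeze everything except the low-rank adapters, and save only the adapter weights. The layer sizes, rank, and checkpoint name are illustrative assumptions.

```python
# Minimal loralib sketch; layer sizes, rank r=8, and the checkpoint name are
# illustrative assumptions, not values prescribed by the repository.
import torch
import torch.nn as nn
import loralib as lora

# LoRA-augmented linear layers: the frozen weight W is supplemented by a
# trainable low-rank update B @ A of rank r.
model = nn.Sequential(
    lora.Linear(768, 768, r=8),
    nn.ReLU(),
    lora.Linear(768, 2, r=8),
)

# Freeze all parameters except the LoRA matrices before fine-tuning.
lora.mark_only_lora_as_trainable(model)

# ... fine-tune as usual ...

# After training, persist only the small set of LoRA parameters.
torch.save(lora.lora_state_dict(model), "lora_adapters.pt")
```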