[CogSci'21] Study of human inductive biases in CNNs and Transformers.
Includes PyTorch -> Keras model porting code for DeiT models with fine-tuning and inference notebooks.
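As an illustration of the porting direction this repo describes, here is a minimal, hypothetical sketch of moving one DeiT layer from PyTorch to Keras. The hub entry point is the official DeiT one; the layer chosen and the sanity check are assumptions for illustration, not this repo's actual conversion code.

    import numpy as np
    import torch
    import tensorflow as tf

    # Load a pretrained DeiT model from the official hub entry point.
    pt_model = torch.hub.load("facebookresearch/deit:main",
                              "deit_tiny_patch16_224", pretrained=True)
    pt_head = pt_model.head  # final classifier, a torch.nn.Linear

    # torch.nn.Linear stores its kernel as (out_features, in_features);
    # a Keras Dense layer expects (in_features, out_features), hence the transpose.
    tf_head = tf.keras.layers.Dense(pt_head.out_features)
    tf_head.build((None, pt_head.in_features))
    tf_head.set_weights([pt_head.weight.detach().numpy().T,
                         pt_head.bias.detach().numpy()])

    # Sanity check: the two heads should agree on random features.
    x = np.random.randn(4, pt_head.in_features).astype("float32")
    ref = pt_head(torch.from_numpy(x)).detach().numpy()
    assert np.allclose(ref, tf_head(x).numpy(), atol=1e-5)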
Source code for the "Computationally Tractable Riemannian Manifolds for Graph Embeddings" paper
GitHub code for the paper "Maximum Class Separation as Inductive Bias in One Matrix" (arXiv: https://arxiv.org/abs/2206.08704).
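The idea behind the paper is to fix the class vectors to be maximally separated rather than learning them. Below is a minimal sketch of one standard construction of such a matrix (a regular simplex, giving pairwise cosine similarity of exactly -1/(C-1)); it illustrates the concept but is not necessarily the paper's exact recursive algorithm.

    import numpy as np

    def simplex_prototypes(num_classes):
        """Unit class vectors in R^(C-1) with maximal pairwise separation."""
        # Center the standard basis of R^C; the rows then lie in a
        # (C-1)-dimensional subspace through the origin.
        centered = np.eye(num_classes) - 1.0 / num_classes
        # Project onto an orthonormal basis of that subspace (via SVD).
        _, _, vt = np.linalg.svd(centered)
        protos = centered @ vt[: num_classes - 1].T   # shape (C, C-1)
        return protos / np.linalg.norm(protos, axis=1, keepdims=True)

    P = simplex_prototypes(10)
    # Every off-diagonal cosine similarity equals -1/(C-1) = -1/9.
    print(np.round(P @ P.T, 3))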
Code for "Learning Inductive Biases with Simple Neural Networks" (Feinman & Lake, 2018).
Emergent Communication Pretraining for Few-Shot Machine Translation
A non-exhaustive collection of vision transformer models implemented in TensorFlow.
Official repository of the paper "An Information Extraction Study: Take In Mind the Tokenization!"
Implementation code for "GKD: Semi-supervised Graph Knowledge Distillation for Graph-Independent Inference", accepted at Medical Image Computing and Computer Assisted Intervention (MICCAI 2021).
This is the official code for the CoLLAs 2022 paper "InBiaseD: Inductive Bias Distillation to Improve Generalization and Robustness through Shape-awareness". A sketch of the general distillation idea follows.
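To convey the flavor of distilling an inductive bias, here is a hedged sketch of a loss that pulls a standard network's predictions toward a shape-aware branch. The loss form and the alpha and temperature hyperparameters are assumptions for illustration, not the paper's exact objective.

    import torch.nn.functional as F

    def bias_distillation_loss(rgb_logits, shape_logits, labels,
                               alpha=0.5, temperature=2.0):
        # Supervised loss on the standard (RGB) branch ...
        ce = F.cross_entropy(rgb_logits, labels)
        # ... plus a KL term pulling it toward the shape-aware branch.
        kd = F.kl_div(F.log_softmax(rgb_logits / temperature, dim=1),
                      F.softmax(shape_logits / temperature, dim=1),
                      reduction="batchmean") * temperature ** 2
        return (1 - alpha) * ce + alpha * kd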
This work provides extensive empirical results on training LMs to count. We find that while traditional RNNs trivially achieve inductive counting, Transformers must rely on positional embeddings to count out-of-domain. Modern RNNs (e.g., RWKV, Mamba) also largely underperform traditional RNNs in generalizing counting inductively.
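To see why counting is trivially inductive for a traditional RNN, note that a single recurrent unit can implement an exact counter that extrapolates to any length. A hand-set sketch for illustration (the paper trains such models rather than setting weights by hand):

    import torch

    # A single ReLU RNN unit with weights set so that h <- h + 1 per token.
    rnn = torch.nn.RNNCell(input_size=1, hidden_size=1, nonlinearity="relu")
    with torch.no_grad():
        rnn.weight_ih.fill_(1.0)  # add each input symbol (encoded as 1.0)
        rnn.weight_hh.fill_(1.0)  # carry the running count forward
        rnn.bias_ih.zero_()
        rnn.bias_hh.zero_()

    h = torch.zeros(1, 1)
    for _ in range(1000):         # far beyond any plausible training length
        h = rnn(torch.ones(1, 1), h)
    print(h.item())               # 1000.0 -- exact, length-independent counting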
Utility repository for processing and visualizing NADs of arbitrary PyTorch models.
Towards Exact Computation of Inductive Bias (IJCAI 2024)