Ctrl_C_NN

Dependency-free neural network inference framework in a single file.

CURRENTLY IN DEVELOPMENT

What is it for

Inference with simple neural networks in environments where installing dependencies is not possible. This project is, and will remain, dependency-free and is released under a maximally permissive open-source license. Whatever you need it for, just copy the single .py file into your project and you can run an already-trained neural network.

What is it NOT for

Since it is written 100% in Python, its performance is terrible compared to PyTorch or NumPy-based frameworks. It is not designed for training neural networks, but for loading and running simple, already-trained PyTorch networks.

Sample Usage: Inference with pretrained PyTorch NN

import ctrl_c_nn
from ctrl_c_nn import Tensor, nn, ImageIO

# load the PNG, resize it to 224x224, and convert it to a normalized float BCHW tensor
input_image = ImageIO.read_png("dog.png", num_channels=3, resize=(224, 224),
                               dimorder="BCHW", to_float=True,
                               mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

model = SqueezeNet()  # define SqueezeNet yourself; when doing so, replace every torch.nn with ctrl_c_nn.nn
model.load_state_dict(ctrl_c_nn.load("model.pth"))  # load the pretrained PyTorch weights
output = model(input_image)
probabilities = ctrl_c_nn.utils.softmax(output[0], dim=0)
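
The probabilities can then be turned into a prediction with plain Python. This is only a hypothetical post-processing sketch, not part of ctrl_c_nn: it assumes that indexing the 1-D probabilities tensor yields comparable Python numbers and that .shape reports its length.

# hypothetical post-processing sketch (assumes indexing returns plain, comparable numbers)
num_classes = probabilities.shape[0]
best_class = max(range(num_classes), key=lambda i: probabilities[i])
print("predicted class id:", best_class, "probability:", probabilities[best_class])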

WIP

- Base Tensor class
- Tensor operations (+, *, @)
- Tensor Broadcasting
- Tensor Shape Manipulation (e.g. reshape)
- Simple Layers and Non-linearities
- Forward pass of simple NN
- Backward pass of simple NN 🔶 WIP
- Convolutional Layers
- Transposed Conv & Upsampling 🔶 WIP
- Reading pth files
- Forward pass of CNN
- Backward pass of CNN
- Image IO: Read PNG files
- Image IO: Read JPG files
- Image IO: Save images
- ...

Hopefully one day

- GPU Matmul (e.g. OpenCL)
- Autograd
- ...

Sample Usage: Tensor

from ctrl_c_nn import Tensor

a = Tensor.zeros(2, 4, 8, 2)
b = Tensor.zeros((2, 8))
c = a @ b  # shape (2, 4, 8, 8)
d = c[0, 2:, :, :1] + b.unsqueeze(2)  # shape (2, 8, 1)
e = d.reshape((1, 2, 4, 2, 1)) + 1  # shape (1, 2, 4, 2, 1)
f = e.sum(3)  # shape (1, 2, 4, 1)
g = e.permute((3, 0, 2, 1))  # shape (1, 1, 4, 2)
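
As a small, hedged addition (assuming ctrl_c_nn follows NumPy/PyTorch-style broadcasting rules, as the Tensor Broadcasting entry in the WIP list above suggests), elementwise operations broadcast across size-1 dimensions:

h = Tensor.zeros(4, 1, 3) + Tensor.zeros((1, 5, 3))  # would broadcast to shape (4, 5, 3)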

Sample Usage: Training a simple NN

from ctrl_c_nn import nn, Tensor

# it's simplest to define the network as a single nn.Sequential
model = nn.Sequential(
    nn.Linear(20, 128),
    nn.LeakyReLU(),
    nn.SkipStart("a"),  # start of skip connection "a"
    nn.Linear(128, 128),
    nn.LeakyReLU(),
    nn.SkipEnd("a"),  # end of skip connection "a"
    nn.Linear(128, 2),
    nn.LeakyReLU(),
)
loss_fn = nn.MSELoss()

for i in range(2000):
    input_tensor = Tensor.random_float((8, 20))
    target_tensor = Tensor.fill((8, 2), 1.0)  # target with the same shape as the model output

    # no zero_grad() needed atm (grads don't accumulate)
    output_tensor = model(input_tensor)
    loss = loss_fn(output_tensor, target_tensor)

    print("loss", loss.item(), " iteration", i)

    dout = loss_fn.backward(loss)
    dout = model.backward(dout)
    model.update(lr=0.001)
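
After training, the model can be queried on new inputs the same way as inside the loop. A minimal sketch, assuming the forward call behaves identically outside the training loop:

# hypothetical inference after training
test_input = Tensor.random_float((1, 20))
prediction = model(test_input)  # expected shape (1, 2), given the final Linear(128, 2)
print("prediction shape", prediction.shape)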
