This repo presents an FPGA implementation of a Deep Autoencoder architecture trained on the MNIST database, targeting machine-vision tasks: data reconstruction and classification in the latent space. To deploy machine learning (ML) models on FPGAs, a companion compiler based on High-Level Synthesis (HLS), hls4ml, is used. In addition, the neural networks are optimized through both compression and quantization, yielding a substantial reduction in model size, latency, and energy consumption.
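
For readers unfamiliar with the toolflow, a minimal sketch of how a Keras autoencoder can be handed to hls4ml is shown below. The layer sizes, latent dimension, output directory, and FPGA part are illustrative placeholders and do not reflect the configuration used in this repo.

```python
# Minimal sketch of the Keras -> hls4ml flow. All hyperparameters here
# (encoder/decoder widths, latent dimension, FPGA part) are placeholders,
# not the settings used in this repository.
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
import hls4ml

# Toy dense autoencoder on flattened 28x28 MNIST images
inp = Input(shape=(784,))
x = Dense(64, activation='relu')(inp)
latent = Dense(16, activation='relu', name='latent')(x)  # latent space used downstream
x = Dense(64, activation='relu')(latent)
out = Dense(784, activation='sigmoid')(x)
autoencoder = Model(inp, out)
autoencoder.compile(optimizer='adam', loss='mse')
# autoencoder.fit(x_train, x_train, ...)  # train on MNIST before conversion

# Generate an hls4ml configuration and convert the trained model to an HLS project
config = hls4ml.utils.config_from_keras_model(autoencoder, granularity='name')
hls_model = hls4ml.converters.convert_from_keras_model(
    autoencoder,
    hls_config=config,
    output_dir='hls_autoencoder',   # hypothetical output directory
    part='xcu250-figd2104-2L-e',    # placeholder FPGA part
)
hls_model.compile()  # build the C simulation library
y_hls = hls_model.predict(np.random.rand(1, 784).astype(np.float32))
```

From the generated project, hls4ml can then drive the vendor HLS tools (e.g. via `hls_model.build()`) to produce the actual FPGA firmware.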