SketchINR: A First Look into Sketches as Implicit Neural Representations

SketchX, CVSSP, University of Surrey, United Kingdom
CVPR 2024

Abstract

We propose SketchINR to advance the representation of vector sketches with implicit neural models. A variable-length vector sketch is compressed into a latent space of fixed dimension that implicitly encodes the underlying shape as a function of time and strokes. The learned function predicts the \(xy\) point coordinates in a sketch at each time and stroke. Despite its simplicity, SketchINR outperforms existing representations at multiple tasks: (i) Encoding an entire sketch dataset into fixed-size latent vectors, SketchINR gives \(60\times\) and \(10\times\) data compression over raster and vector sketches, respectively. (ii) SketchINR's auto-decoder provides a much higher-fidelity representation than other learned vector sketch representations, and is uniquely able to scale to complex vector sketches such as FS-COCO. (iii) SketchINR supports parallelisation that can decode/render \(\sim\)\(100\times\) faster than other learned vector representations such as SketchRNN. (iv) SketchINR, for the first time, emulates the human ability to reproduce a sketch with varying abstraction in terms of the number and complexity of strokes. As a first look at implicit sketches, SketchINR's compact, high-fidelity representation will support future work in modelling long and complex sketches.
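To make the formulation concrete, the snippet below is a minimal, hypothetical sketch of what such an implicit decoder could look like: a small MLP that maps a fixed-size latent code plus a (stroke index, time) query to an \(xy\) coordinate. The module names, layer sizes, and the Fourier positional encoding are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal illustrative implicit sketch decoder (assumed design, not the paper's exact model).
import torch
import torch.nn as nn

class ImplicitSketchDecoder(nn.Module):
    """Maps (latent code z, stroke index s, time t) -> (x, y) point on the sketch."""
    def __init__(self, latent_dim: int = 256, hidden_dim: int = 512, num_freqs: int = 8):
        super().__init__()
        self.num_freqs = num_freqs
        # latent code + Fourier-encoded (stroke, time) query
        in_dim = latent_dim + 4 * num_freqs
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # predicted (x, y)
        )

    def positional_encoding(self, q: torch.Tensor) -> torch.Tensor:
        # q: (B, 2) with normalised stroke index and time in [0, 1]
        freqs = 2.0 ** torch.arange(self.num_freqs, device=q.device) * torch.pi
        angles = q.unsqueeze(-1) * freqs                      # (B, 2, num_freqs)
        enc = torch.cat([angles.sin(), angles.cos()], dim=-1)  # (B, 2, 2 * num_freqs)
        return enc.flatten(start_dim=1)                       # (B, 4 * num_freqs)

    def forward(self, z: torch.Tensor, stroke_time: torch.Tensor) -> torch.Tensor:
        # z: (B, latent_dim); stroke_time: (B, 2) -> (B, 2) xy coordinates
        return self.mlp(torch.cat([z, self.positional_encoding(stroke_time)], dim=1))
```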


(i) We explore a latent-space representation for vector sketches that implicitly encodes the underlying sketch as a function of time and strokes. (ii) We train an auto-decoder to reconstruct the input sketch from this latent representation. (iii) SketchINR's auto-decoder provides a much higher-fidelity representation than other learned vector sketch representations.
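A rough sketch of the auto-decoder idea, under the same illustrative assumptions as the decoder above: instead of an encoder, one free latent code per sketch is optimised jointly with the shared decoder so that queried (stroke, time) points reproduce the ground-truth coordinates. The loss, optimiser, and hyperparameters below are placeholders, not the paper's reported settings.

```python
# Illustrative auto-decoder fitting loop (assumed setup; hyperparameters are placeholders).
import torch

num_sketches, latent_dim = 1000, 256
decoder = ImplicitSketchDecoder(latent_dim=latent_dim)  # defined in the sketch above
# One learnable latent code per training sketch (the "auto-decoder" latent table).
latents = torch.nn.Embedding(num_sketches, latent_dim)
torch.nn.init.normal_(latents.weight, std=0.01)

optimiser = torch.optim.Adam(
    list(decoder.parameters()) + list(latents.parameters()), lr=1e-4
)

def training_step(sketch_ids, stroke_time, xy_gt):
    """sketch_ids: (B,) long; stroke_time: (B, 2) queries; xy_gt: (B, 2) ground-truth points."""
    z = latents(sketch_ids)              # look up per-sketch latent codes
    xy_pred = decoder(z, stroke_time)    # query the implicit function
    loss = torch.nn.functional.mse_loss(xy_pred, xy_gt)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()
```

In this style of setup, an unseen sketch would be represented by freezing the shared decoder and optimising only a new latent code against that sketch's points.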

BibTeX

@inproceedings{bandyopadhyay-inr,
  title={{SketchINR: A First Look into Sketches as Implicit Neural Representations}},
  author={Bandyopadhyay, Hmrishav and Bhunia, Ayan Kumar and Chowdhury, Pinaki Nath and Sain, Aneeshan and Xiang, Tao and Hospedales, Timothy and Song, Yi-Zhe},
  booktitle={CVPR},
  year={2024}
}