In this paper, we democratise 3D content creation, enabling precise generation of 3D shapes from abstract sketches while overcoming limitations tied to drawing skills. We introduce a novel part-level modelling and alignment framework that facilitates abstraction modelling and cross-modal correspondence. Leveraging the same part-level decoder, our approach seamlessly extends to sketch modelling by establishing correspondence between CLIPasso edgemaps and projected 3D part regions, eliminating the need for a dataset pairing human sketches and 3D shapes. Additionally, our method introduces a seamless in-position editing process as a byproduct of cross-modal part-aligned modelling. Operating in a low-dimensional implicit space, our approach significantly reduces computational demands and processing time.
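The pipeline above can be sketched at a high level as follows. This is a hedged toy illustration, not the authors' implementation: the encoder, part decoder, latent dimensions, and voxel grid are all hypothetical stand-ins for the paper's part-level decoder operating in a low-dimensional implicit space.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(sketch):
    # Toy stand-in: project the flattened sketch into 4 part-level
    # latents of dimension 16 (one latent per shape part).
    W = rng.standard_normal((4, 16, sketch.size))
    return [W[i] @ sketch.ravel() for i in range(4)]

def part_decoder(z):
    # Toy implicit decoder: map a part latent to an occupancy field,
    # sampled here on a small 8x8x8 voxel grid.
    W = rng.standard_normal((8 * 8 * 8, z.size))
    return (W @ z > 0).reshape(8, 8, 8)

def generate_shape(sketch):
    # Decode every part latent, then assemble the parts by taking
    # the union of their occupancy fields.
    parts = [part_decoder(z) for z in encoder(sketch)]
    return np.logical_or.reduce(parts)

shape = generate_shape(rng.standard_normal((32, 32)))
```

The part-wise decomposition is what later enables localised editing: each sketch region corresponds to one part latent, so changing a stroke only touches one decoder input.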

Unconditional Shape Generation

Network Diagram

Sketch Conditioned Shape Generation

Network Diagram

Shape Generation

Qualitative Generation Results

High-fidelity generation of shapes from abstract doodles, without training on hand-drawn sketches.

Fine-Grained Editing

Shape Editing

Fine-grained sketch-shape correspondence allows us to perform highly localised shape edits through corresponding edits in the sketch.
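Because the representation is part-aligned, a localised edit amounts to replacing the latent of a single part while leaving the others untouched. The sketch below assumes a hypothetical per-part latent layout (a dict of part name to latent vector); it is an illustration of the idea, not the paper's API.

```python
import numpy as np

def edit_part(shape_latents, part_id, new_part_latent):
    """Replace the latent code of one part, leaving the rest untouched.

    shape_latents: dict mapping a part id to its latent vector
    (hypothetical layout). Editing a single sketch region re-encodes
    only that part, so the edit stays localised in the decoded shape.
    """
    edited = dict(shape_latents)
    edited[part_id] = new_part_latent
    return edited

# Example: edit only the back of a chair, keeping seat and legs fixed.
chair = {"back": np.zeros(64), "seat": np.zeros(64), "legs": np.zeros(64)}
edited = edit_part(chair, "back", np.ones(64))
```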

Shape Interpolation

Shape Interpolation

Generated shapes can be smoothly morphed into one another by simply interpolating their sketch representations.
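Interpolation of sketch representations can be sketched as plain linear blending of latent codes. The latent dimension and function names here are hypothetical; each intermediate latent would be decoded into a shape by the part-level decoder.

```python
import numpy as np

def interpolate_latents(z_a, z_b, steps=8):
    """Linearly interpolate between two sketch latent codes.

    Returns `steps` latent vectors running from z_a to z_b inclusive;
    decoding each one yields a frame of the shape morph.
    """
    alphas = np.linspace(0.0, 1.0, steps)
    return [(1.0 - a) * z_a + a * z_b for a in alphas]

# Example: 8 evenly spaced latents between two random sketch codes.
z_a, z_b = np.random.randn(128), np.random.randn(128)
frames = interpolate_latents(z_a, z_b)
```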


@article{bandyopadhyay2023doodle,
        title={Doodle Your 3D: From Abstract Freehand Sketches to Precise 3D Shapes},
        author={Hmrishav Bandyopadhyay and Subhadeep Koley and Ayan Das and Aneeshan Sain and Pinaki Nath Chowdhury and Tao Xiang and Ayan Kumar Bhunia and Yi-Zhe Song},
        journal={arXiv preprint arXiv:2312.04043},
        year={2023}
}