I forked a popular implementation that reverse-engineers Google's DreamFusion algorithm, which is closed-source and not publicly available.
The implementation I forked is [here](https://github.com/arontaupe/stable-dreamfusion)
This implementation runs on Stable Diffusion as its base model, which means we should expect worse results than Google's.
The original implementation is [here](https://dreamfusion3d.github.io)
The reason I forked the code was to implement my own Gradio interface for the algorithm. Gradio is a great tool for quickly building interfaces for machine learning models. No code involved: any user can state their wish, and the mechanism will spit out a ready-to-be-rigged model (an .obj file).
## Mixamo
I used Mixamo to rig the model. It is a great tool for rigging and animating models, and above all it is simple: as long as you have a model with a decent humanoid shape in something like a T-pose, you can rig it in seconds. That's exactly what I did here.
Without access to the proprietary sources from Google, and with the beefy but still not quite machine-learning-ready computers we have at the studio, the results are not as good as I had hoped.
Still, the results are quite interesting and I am happy with the outcome.
A single object takes roughly 20 minutes to generate in the Box. Even then, the algorithm is quite particular and will often fail to produce anything coherent at all.