Portkey360
Your imagination, brought to life in 360°.
Portkey 360° brings your imagination to life in 360°. The project generates immersive panoramic environments from text descriptions, which can be explored directly in the browser.
It combines a Next.js + React Three Fiber frontend with a FastAPI backend serving Stable Diffusion v1.5 through Hugging Face Diffusers. The architecture demonstrates how generative AI can power interactive 3D experiences on the web.
Portkey 360° allows users to describe any scene in words and transform it into a 360° panorama within seconds. These outputs are rendered with React Three Fiber and a WebGL viewer, enabling intuitive navigation inside the generated scene.
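Panoramas meant for a 360° viewer are typically equirectangular, which implies a 2:1 width-to-height ratio, and Stable Diffusion v1.5 expects dimensions divisible by 8. A small helper for picking output dimensions might look like this (a hypothetical sketch, not code from the project):

```python
# Hypothetical helper: choose Stable Diffusion output dimensions for an
# equirectangular panorama. Equirectangular projections use a 2:1
# width-to-height ratio, and SD v1.5 wants dimensions divisible by 8.
def panorama_dimensions(height: int = 512) -> tuple[int, int]:
    """Return (width, height) with a 2:1 ratio, both multiples of 8."""
    height = (height // 8) * 8       # snap height down to a multiple of 8
    return height * 2, height        # 2:1 ratio keeps width divisible by 8 too

print(panorama_dimensions())         # (1024, 512)
print(panorama_dimensions(520))      # (1040, 520)
```

Doubling a multiple of 8 is still a multiple of 8, so only the height needs snapping.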
The backend, built with FastAPI, handles text-to-image inference requests and integrates directly with PyTorch and Diffusers. Rather than using containerization, the system is designed for local development and GPU execution, with manual environment setup for reproducibility.
⬆️ Prompt window for entering scene descriptions to generate 360° panoramas.
This separation of inference and rendering ensures responsive web performance while keeping the compute-intensive model logic isolated from the user interface.
One challenge was adapting Stable Diffusion to panoramic images. Training custom LoRA weights introduced distortions when converting to 360° projections, and the model struggled to generalize. I ended up relying on the base model with careful prompt engineering, which gave more reliable panoramas with fewer artifacts.
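The prompt-engineering approach could be expressed as a template that steers the base model toward seamless equirectangular output. The specific keywords below are assumptions for illustration, not the project's exact prompts:

```python
# Illustrative prompt template for panorama generation; the keyword
# choices are assumptions, not the project's actual prompts.
PANORAMA_PREFIX = "360 degree equirectangular panorama, seamless, "
PANORAMA_NEGATIVE = "borders, frame, watermark, text, fisheye distortion"

def build_prompt(description: str) -> dict[str, str]:
    """Wrap a user scene description in panorama-oriented guidance."""
    return {
        "prompt": PANORAMA_PREFIX + description.strip(),
        "negative_prompt": PANORAMA_NEGATIVE,
    }

print(build_prompt("a misty forest at sunrise")["prompt"])
# 360 degree equirectangular panorama, seamless, a misty forest at sunrise
```

Because the guidance lives in a fixed template rather than trained weights, it can be tuned quickly without the distortion risks that fine-tuning introduced.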
This decision taught me to weigh fine-tuning costs vs. practical reliability, a valuable lesson in building generative AI systems.
The next stage of Portkey 360° is to expand beyond static panoramas toward immersive and interactive experiences.
These extensions position Portkey 360° as a prototype for creative worldbuilding tools across web, VR, and AR platforms.