Aerial Diffusion: Text Guided Ground-to-Aerial View Translation from a Single Image using Diffusion Models


We present Aerial Diffusion, a novel method for generating aerial views from a single ground-view image using text guidance. Aerial Diffusion leverages a pretrained text-to-image diffusion model for prior knowledge. We address two main challenges: the domain gap between the ground view and the aerial view, and the fact that the two views lie far apart on the text-image embedding manifold. Our approach applies a homography inspired by inverse perspective mapping before finetuning the pretrained diffusion model. Additionally, finetuning the model with the text corresponding to the ground view helps us capture the details of the ground-view image with relatively low bias towards it. Aerial Diffusion then uses an alternating sampling strategy to search this complex, high-dimensional manifold and generate an aerial image with high fidelity to the ground view. We demonstrate the quality and versatility of Aerial Diffusion on a wide range of images from varied domains, including nature, human actions, and indoor scenes, and show the effectiveness of our method qualitatively through extensive ablations and comparisons. To the best of our knowledge, Aerial Diffusion is the first approach to perform ground-to-aerial view translation in an unsupervised manner.
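
The homography step can be understood as a classical inverse perspective mapping (IPM): a perspective warp that maps a trapezoidal region of the ground plane in the input image to a rectangle, approximating a top-down view. Below is a minimal sketch in Python with OpenCV; the ipm_warp helper and the trapezoid coordinates are illustrative assumptions, not the exact homography used by Aerial Diffusion.

import cv2
import numpy as np

def ipm_warp(ground_image: np.ndarray) -> np.ndarray:
    # Hypothetical IPM-style warp: the source trapezoid below is an
    # illustrative assumption, not the values used in the paper.
    h, w = ground_image.shape[:2]
    # A trapezoid on the ground plane: wide near the camera (image bottom),
    # narrow near the horizon (image middle).
    src = np.float32([
        [w * 0.45, h * 0.55],  # top-left, near the horizon
        [w * 0.55, h * 0.55],  # top-right, near the horizon
        [w * 1.00, h * 1.00],  # bottom-right, near the camera
        [w * 0.00, h * 1.00],  # bottom-left, near the camera
    ])
    # Destination: the full output rectangle, i.e. a rough top-down view.
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(ground_image, H, (w, h))

The warped image serves only as a coarse aerial prior; the finetuned diffusion model and the alternating sampling strategy then refine it into a plausible, high-fidelity aerial view.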
Paper | Code (AerialDiffusion): coming soon


Please cite our work if you find it useful:

@article{Kothandaraman2023AerialDT,
  title={Aerial Diffusion: Text Guided Ground-to-Aerial View Translation from a Single Image using Diffusion Models},
  author={Divya Kothandaraman and Tianyi Zhou and Ming Lin and Dinesh Manocha},
  journal={arXiv preprint arXiv:2303.11444},
  year={2023}
}