Unmanned Aerial Vehicle (UAV) Cross-View Geo-Localization (CVGL) poses significant challenges due to the substantial view discrepancies between oblique UAV images and overhead satellite images. Existing methods rely heavily on supervised learning with labeled datasets to extract viewpoint-invariant features for cross-view retrieval. However, these approaches are computationally expensive, prone to overfitting to region-specific cues, and generalize poorly to new regions. To overcome these issues, we propose an unsupervised solution that lifts the scene representation from UAV observations into 3D space for satellite image generation, yielding a representation that is robust to view distortion. By generating orthographic images that closely resemble satellite views, our method reduces view discrepancies in the feature representation and mitigates shortcuts arising from region-specific image pairing. To further align the perspective of the rendered image with the real one, we design an iterative camera pose updating mechanism that progressively modulates the rendered query image with potential satellite targets, eliminating spatial offsets relative to the reference images. This iterative refinement strategy also enhances cross-view feature invariance through view-consistent fusion across iterations. As such, our unsupervised paradigm naturally avoids region-specific overfitting, enabling generic CVGL for UAV images without feature fine-tuning or data-driven training. Experiments on the University-1652 and SUES-200 datasets demonstrate that our approach significantly improves geo-localization accuracy while remaining robust across diverse regions. Notably, without model fine-tuning or paired training, our method achieves performance competitive with recent supervised methods.