Retrieving precise strategic insights at each decision-making step is critical for Large Language Model (LLM) agents. However, training effective retrieval models is often hindered by the scarcity of in-domain data and the inherent discrepancy between surface-level semantic similarity and functional relevance. In this work, we demonstrate that insight retrieval is fundamentally a procedural matching problem, namely the task of mapping concrete situations to abstract guiding rules, and we show that this capability transfers across domains. We propose InsightEmb, a contrastive training framework that learns abstract reasoning insights by training exclusively on mathematical reasoning data. Without any exposure to in-domain training data, InsightEmb significantly outperforms base embedding models on diverse tasks, including the ALFWorld embodied environment, WebShop online shopping interactions, and SRA-bench agentic skill retrieval. Our results address the gap between semantic similarity and functional relevance in standard embedding models by introducing a reasoning-aware embedding space.
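
The abstract names contrastive training as the core mechanism. As an illustrative sketch only (the paper's actual loss, temperature, and encoder are not specified here), the standard InfoNCE objective used in contrastive embedding training can be written in pure Python: given an embedded situation (query), the embedding of its matching insight (positive), and embeddings of non-matching insights (negatives), the loss is the negative log-softmax of the positive's scaled similarity.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def info_nce_loss(query, positive, negatives, temperature=0.05):
    """InfoNCE: -log( exp(sim(q,p)/t) / sum_k exp(sim(q,k)/t) ).

    The temperature value here is an illustrative default, not taken
    from the paper.
    """
    sims = [cosine(query, positive)] + [cosine(query, n) for n in negatives]
    logits = [s / temperature for s in sims]
    # Log-sum-exp with max subtraction for numerical stability.
    m = max(logits)
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_denom - logits[0]

# A query aligned with its positive and orthogonal to the negative
# yields a near-zero loss; a mismatched positive yields a large loss.
low = info_nce_loss([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0]])
high = info_nce_loss([1.0, 0.0], [0.0, 1.0], [[1.0, 0.0]])
```

Minimizing this loss pulls situation embeddings toward the insights that functionally apply to them and pushes them away from insights that are merely semantically similar, which is the behavior the abstract attributes to a reasoning-aware embedding space.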