This paper presents a hardness-aware deep metric learning (HDML) framework for image clustering and retrieval. Most previous deep metric learning methods employ hard negative mining to alleviate the lack of informative samples during training. However, this mining strategy utilizes only a subset of the training data, which may be insufficient to comprehensively characterize the global geometry of the embedding space. To address this problem, we perform linear interpolation on embeddings to adaptively manipulate their hardness levels and generate corresponding label-preserving synthetics for recycled training, so that the information buried in all samples is fully exploited and the metric is always challenged at an appropriate level of difficulty. Since a single synthetic per sample may still be insufficient to describe the unobserved distribution of the training data, which is crucial for generalization performance, we further extend HDML to generate multiple synthetics for each sample. We propose a random hardness-aware deep metric learning (HDML-R) method and an adaptive hardness-aware deep metric learning (HDML-A) method, which sample multiple random and adaptive directions, respectively, for hardness-aware synthesis. Extensive experimental results on the widely used CUB-200-2011, Cars196, Stanford Online Products, In-Shop Clothes Retrieval, and VehicleID datasets demonstrate the effectiveness of the proposed framework.
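The core manipulation described above, linear interpolation on embeddings to control hardness, can be sketched as follows. This is an illustrative sketch only: the function name, the `hardness` parameter, and the use of plain NumPy are assumptions for exposition, and the full framework additionally maps synthetics back through a generator to keep them label-preserving.

```python
import numpy as np

def synthesize_harder_negative(anchor, negative, hardness):
    """Linearly interpolate a negative embedding toward the anchor.

    hardness in [0, 1]: 0 returns the original negative unchanged,
    while values closer to 1 pull the synthetic toward the anchor,
    producing a harder (smaller-margin) negative for recycled training.
    Illustrative sketch; not the authors' exact implementation.
    """
    direction = anchor - negative
    return negative + hardness * direction
```

For example, interpolating halfway (`hardness=0.5`) yields a synthetic negative strictly closer to the anchor than the original, so the metric is trained against a more challenging sample while the interpolation stays on the segment between the two embeddings.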