Automatic image annotation (AIA) plays an important role in image understanding and retrieval and has attracted much research attention. Annotation can be posed as a classification problem in which each annotation keyword defines a class of database images labeled with that semantic word. Establishing a one-to-one correspondence between image regions and semantic keywords has been shown to be a feasible approach to automatic image annotation. In this paper, we propose EMDAIA, a novel algorithm for automatic image annotation based on an ensemble of descriptors. EMDAIA treats annotation as a multi-class image classification problem. The procedure of EMDAIA is as follows. First, each image is segmented into a collection of regions. For each region, a variety of low-level visual descriptors is extracted. All regions are then clustered into k categories, with each cluster associated with an annotation keyword. Finally, for an unlabeled instance, the distance between the instance and each cluster center is measured, and the keyword of the nearest category is chosen as its annotation. Experimental results on LabelMe, a benchmark dataset, show that EMDAIA outperforms several recent state-of-the-art automatic image annotation algorithms.
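The cluster-then-nearest-center procedure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes segmentation and descriptor extraction have already produced a feature vector per region, uses a plain Lloyd's k-means in place of the ensemble-of-descriptors machinery, and all function and variable names are illustrative.

```python
import numpy as np

def build_annotator(region_features, keywords, k, n_iter=20, seed=0):
    """Cluster region descriptors into k categories with Lloyd's k-means,
    then attach a majority-vote keyword to each cluster center.
    region_features: (n, d) array of descriptors; keywords: n labels."""
    rng = np.random.default_rng(seed)
    X = np.asarray(region_features, dtype=float)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # assign each region to its nearest cluster center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # move each center to the mean of its assigned regions
        for j in range(k):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)
    # each cluster is associated with its most frequent training keyword
    cluster_keywords = []
    for j in range(k):
        members = [keywords[i] for i in np.flatnonzero(assign == j)]
        cluster_keywords.append(
            max(set(members), key=members.count) if members else None)
    return centers, cluster_keywords

def annotate(region_feature, centers, cluster_keywords):
    """Label an unseen region with the keyword of the nearest center."""
    d = np.linalg.norm(centers - np.asarray(region_feature, float), axis=1)
    return cluster_keywords[int(d.argmin())]
```

For example, clustering two well-separated groups of region descriptors labeled "sky" and "grass" and then calling `annotate` on a new descriptor returns the keyword of whichever cluster center lies closest.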