Graph embedding has proven effective for representing graph information and capturing deep relationships in graph data. Most recent graph embedding methods focus on attributed graphs, which preserve both structure and content information in the network. However, corruption can exist in the graph structure as well as in the node content, and both can lead to inferior embedding results. Unfortunately, few existing graph embedding algorithms consider the corruption problem, and to the best of our knowledge, none has studied structural corruption in attributed graphs, including missing and redundant edges. This problem is difficult for previous methods, mainly due to two challenges: (1) the variety of possible corruption causes makes corrupted edges hard to recognize, and (2) the complexity of graph-structured data increases the difficulty of handling corruption during graph embedding. These challenges lead us to propose a novel autoencoder-based graph embedding approach that is robust against structural corruption. Our idea builds on the recently observed memorization effect in deep learning: deep neural networks tend to fit clean data first, before they over-fit corrupted data. Specifically, we train two autoencoders simultaneously and let them learn the reliability of the edges in the graph from each other. Each autoencoder evaluates the edges according to its reconstructed structure and devalues the distrusted edges to update the structure information. The updated structure is then used in the next iteration as the ground truth of its peer network. Experiments on different versions of real-world graphs show state-of-the-art results and demonstrate the robustness of our model against structural corruption.
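
The following is a minimal sketch of the peer-teaching loop described above, not the authors' implementation: two graph autoencoders each reconstruct the adjacency structure, score the reliability of existing edges from their own reconstruction, devalue the distrusted edges, and hand the updated structure to the peer network as its target for the next iteration. The class and function names (`GraphAutoencoder`, `devalue_edges`, `peer_train`), the quantile-based trust threshold, and the soft damping factor are all illustrative assumptions.

```python
# Illustrative sketch of co-training two graph autoencoders that exchange
# devalued structure as each other's reconstruction target. Names, the
# quantile threshold, and the damping factor are assumptions, not the
# paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphAutoencoder(nn.Module):
    """Two-layer GCN-style encoder with an inner-product decoder."""
    def __init__(self, in_dim, hid_dim, emb_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w2 = nn.Linear(hid_dim, emb_dim, bias=False)

    def forward(self, adj_norm, x):
        h = torch.relu(adj_norm @ self.w1(x))
        z = adj_norm @ self.w2(h)
        # Reconstructed edge probabilities from embedding similarity.
        return torch.sigmoid(z @ z.t())


def normalize(adj):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2}."""
    a = adj + torch.eye(adj.size(0))
    d = a.sum(1).pow(-0.5)
    return d.unsqueeze(1) * a * d.unsqueeze(0)


def devalue_edges(adj, recon, keep_ratio=0.9):
    """Treat edges whose reconstruction score falls in the bottom
    (1 - keep_ratio) fraction as distrusted and devalue them."""
    scores = recon[adj > 0]
    if scores.numel() == 0:
        return adj
    thresh = torch.quantile(scores, 1.0 - keep_ratio)
    trusted = (recon >= thresh).float()
    # Distrusted edges are softly damped rather than hard-removed.
    return adj * (trusted + 0.1 * (1.0 - trusted))


def peer_train(adj, x, epochs=200, dims=(64, 32)):
    n, f = x.shape
    nets = [GraphAutoencoder(f, *dims), GraphAutoencoder(f, *dims)]
    opts = [torch.optim.Adam(m.parameters(), lr=1e-2) for m in nets]
    targets = [adj.clone(), adj.clone()]   # each network's current ground truth

    for _ in range(epochs):
        recons = []
        for net, opt, target in zip(nets, opts, targets):
            recon = net(normalize(target), x)
            loss = F.binary_cross_entropy(recon, target.clamp(0, 1))
            opt.zero_grad()
            loss.backward()
            opt.step()
            recons.append(recon.detach())
        # Cross-update: the structure devalued by one network becomes the
        # peer's reconstruction target in the next iteration.
        targets = [devalue_edges(adj, recons[1]), devalue_edges(adj, recons[0])]
    return nets, targets
```

The soft damping (rather than hard edge removal) and the quantile-based threshold are one plausible instantiation of "devaluing distrusted edges"; the key point the sketch conveys is the cross-exchange of updated structure between the two networks, mirroring the memorization-effect rationale in the abstract.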