The rapid growth of networked applications that naturally generate complex text data, which contains not only intrinsic features but also interdependent relations, has created a demand for efficient classification of such data. Many classification algorithms have been proposed, but they usually require fully labeled text examples as input. In many networked applications, however, labeling text data can be expensive, and hence a large amount of text may remain unlabeled. In this paper we study the problem of classifying networked text data when only positive and unlabeled examples are available. We present a non-negative matrix factorization-based approach to networked text classification that jointly factorizes the content matrix of the nodes and the topological network structure, and that incorporates supervised information into the objective function via a consensus principle. We propose a novel learning algorithm, namely puNet (positive and unlabeled learning algorithm for Networked text data), which efficiently classifies networked text even when the training set contains only a small number of positive examples and a large number of unlabeled ones. We conduct a series of experiments on benchmark networked datasets and demonstrate the effectiveness of our algorithm.
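The core idea of jointly factorizing node content and network topology under a consensus principle can be sketched as follows. This is an illustrative sketch, not the paper's puNet algorithm: the specific objective (content NMF plus symmetric adjacency factorization plus a consensus penalty coupling the two node-factor matrices), the multiplicative update rules, and all function names are assumptions of this example, and the positive/unlabeled supervision term is omitted.

```python
import numpy as np

def joint_nmf_consensus(X, A, k, alpha=1.0, n_iter=200, eps=1e-9, seed=0):
    """Illustrative joint factorization (NOT the paper's puNet):
    minimize ||X - U V^T||_F^2 + ||A - H H^T||_F^2 + alpha*||U - H||_F^2
    over U, V, H >= 0, via heuristic multiplicative updates.
    X: n x m node-content matrix (nonnegative),
    A: n x n symmetric nonnegative adjacency matrix,
    k: number of latent factors."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    U = rng.random((n, k))   # content-side node factors
    V = rng.random((m, k))   # word/feature factors
    H = rng.random((n, k))   # topology-side node factors
    for _ in range(n_iter):
        # Each update puts the negative gradient part in the numerator
        # and the positive part in the denominator, keeping factors >= 0.
        U *= (X @ V + alpha * H) / (U @ (V.T @ V) + alpha * U + eps)
        V *= (X.T @ U) / (V @ (U.T @ U) + eps)
        H *= (2 * A @ H + alpha * U) / (2 * H @ (H.T @ H) + alpha * H + eps)
    return U, V, H

def objective(X, A, U, V, H, alpha=1.0):
    """Value of the sketched joint objective."""
    return (np.linalg.norm(X - U @ V.T) ** 2
            + np.linalg.norm(A - H @ H.T) ** 2
            + alpha * np.linalg.norm(U - H) ** 2)
```

The consensus penalty `alpha * ||U - H||^2` encourages the content-based and topology-based node representations to agree, which is one way to realize the consensus principle the abstract refers to; the resulting shared low-dimensional representation could then be fed to a PU classifier.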