One of the biggest values I see in computational text analysis is the ability to see how language was used in the past and how it is used in the present. For historical documents, text mining gives us the potential to examine how things were written, how certain topics were discussed, and almost any other question relating to the past and how words were used. From an anthropological perspective, this is incredibly important for looking at colloquial language and understanding how geographic regions and cultures use language differently based on context. Additionally, for some anthropologists (including myself), the use of language and the frequencies of certain words have become particularly interesting in terms of digital landscapes. One of my particular interests is looking at the buzzwords used on online auction platforms to advertise the sale of illicitly trafficked artifacts and antiquities. This is especially helpful to people in the heritage and museum fields, who can use it to predict how objects are trafficked and sold and attempt to prevent further damage to cultural heritage and historical contexts.
At its most basic, text mining is useful for looking at the frequency of words and, from there, producing visualizations and statistics from the data. It helps us look at documents in the context of how they were developed, which can offer possible historical insights into the nature of those documents.
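The kind of word-frequency count described above can be sketched in a few lines of Python using only the standard library. The sample sentence below is an invented placeholder, not drawn from any real corpus:

```python
import re
from collections import Counter

def word_frequencies(text):
    """Lowercase the text, split it into words, and count each word."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words)

# A made-up example sentence standing in for a digitized document.
sample = "The artifact was sold, and the artifact was shipped."
freqs = word_frequencies(sample)
print(freqs.most_common(3))  # the most frequent words, highest count first
```

Counts like these are the raw material for the visualizations and statistics mentioned above; in practice a researcher would run this over an entire corpus rather than a single sentence.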
One downfall of this method, from my understanding, is that if the texts are not digital to begin with, the programs that analyze them will not be able to read them. Scanned documents are simply images: readable to humans, but not to computers. This is often the case for historical documents, so these texts must be accompanied by a transcription, stored as metadata, that renders all of the text machine-readable. This creates an imperfect system: handwritten work survives in the digital record, though not in as much detail as the original held before the time of computational text analysis.