Content Analysis

Luke Breuer
2009-02-11 22:46 UTC

introduction
People currently put a lot of energy into processing ideas, and I think some of that processing could ultimately be done by computers. Let's take an example: a controversial event just happened, and ten different newspapers have ten different stories on what happened. There are two ways to compare & contrast these articles:
  1. see how the presuppositions and hard facts compare
  2. see whether the logic is bulletproof

Currently, these actions are [probably] performed by printing out all the different articles and marking them up. Facts that are identical between articles are marked as such. The differences in any presuppositions are analyzed. Any faulty logic is identified. The end result is [hopefully] a well-balanced perspective on what actually happened. This process takes a lot of effort; I think that enabling a computer to participate in it will yield great dividends.
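
As a rough illustration of how a computer might take part, here is a minimal Python sketch that compares two articles whose facts have already been extracted by hand into short, normalized statements. Everything in it (the fact wording, the article contents) is an invented placeholder, not data from any real article.

    # Compare the extracted facts of two articles.
    # The fact statements below are invented placeholders; in practice each
    # article's facts would come from a manual or assisted markup step.
    article_a = {
        "protest began at 14:00",
        "police estimate 5,000 attendees",
        "two arrests were made",
    }
    article_b = {
        "protest began at 14:00",
        "organizers estimate 20,000 attendees",
        "two arrests were made",
    }

    shared = article_a & article_b    # facts both papers report
    only_a = article_a - article_b    # facts only the first paper reports
    only_b = article_b - article_a    # facts only the second paper reports

    print("reported by both:", shared)
    print("only in article A:", only_a)
    print("only in article B:", only_b)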
what pieces?
Every verifiable fact in a news article can be identified and characterized (a rough data sketch follows the list):
  • what is the nature of the fact?
  • how might someone go about verifying it?
  • what are some related facts?
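
One possible way to record those three questions as data, assuming nothing about how the answers are produced; the field names are my own illustration, not a fixed schema:

    from dataclasses import dataclass, field

    @dataclass
    class Fact:
        """One verifiable claim extracted from an article (illustrative only)."""
        statement: str     # the claim itself
        nature: str        # what kind of fact this is (official figure, eyewitness count, ...)
        verification: str  # how someone might go about verifying it
        related: list[str] = field(default_factory=list)  # ids of related facts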

Every presupposition can likewise be analyzed (a matching sketch follows this list):
  • what is the nature of the presupposition?
  • what is the history of the presupposition?
  • related presuppositions?
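
A matching sketch for presuppositions, under the same assumptions:

    from dataclasses import dataclass, field

    @dataclass
    class Presupposition:
        """One assumption an article rests on (illustrative only)."""
        statement: str  # the presupposition itself
        nature: str     # what kind of assumption this is
        history: str    # where it comes from and how it has been used
        related: list[str] = field(default_factory=list)  # ids of related presuppositions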

Currently, facts, presuppositions, and logic are all intertwined in news articles, and teasing them apart takes effort. Let's say that a given newspaper adheres to a presupposition I find false, one that significantly colors its articles. If the facts and other presuppositions are broken down digitally as I describe above, I can still gain useful information from the article without having to wade through logic and conclusions with which I disagree.
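
A hedged sketch of the kind of filtering such a breakdown would allow. It assumes each claim has already been tagged (by hand or otherwise) with the presuppositions it depends on; the data here is invented.

    # Keep only the claims whose supporting presuppositions I accept.
    claims = [
        {"text": "two arrests were made", "depends_on": set()},
        {"text": "attendance proves policy X has failed",
         "depends_on": {"policy X was meant to prevent protests"}},
        {"text": "attendance was 5,000", "depends_on": {"police estimates are reliable"}},
    ]

    rejected = {"policy X was meant to prevent protests"}  # presuppositions I find false

    usable = [c["text"] for c in claims if not (c["depends_on"] & rejected)]
    print(usable)  # claims that do not rest on a rejected presupposition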
interesting questions
  • which newspapers consistently omit certain facts or classes of facts?
  • what are the underlying presuppositions of different journalists?
  • how does the addition of a fact or presupposition change a story?
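
Given a corpus broken down this way, the first two questions reduce to simple aggregations. A sketch with invented data:

    from collections import Counter

    # Invented corpus: which fact ids each newspaper reported, and which
    # presuppositions each journalist's articles relied on.
    facts_by_paper = {
        "Paper A": {"f1", "f2", "f3"},
        "Paper B": {"f1", "f3"},
        "Paper C": {"f1"},
    }
    presuppositions_by_journalist = {
        "Journalist X": ["official sources are reliable", "official sources are reliable"],
        "Journalist Y": ["protest movements are fringe"],
    }

    all_facts = set().union(*facts_by_paper.values())

    # Which facts does each paper omit?
    for paper, reported in sorted(facts_by_paper.items()):
        print(paper, "omits", all_facts - reported)

    # How often does each journalist lean on each presupposition?
    for journalist, presups in presuppositions_by_journalist.items():
        print(journalist, Counter(presups))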