Purpose: to increase the rate at which people can absorb information/knowledge.
To do this extremely well, we need a much better understanding of how knowledge is structured, and of how people absorb it, than we currently have.
A good start would be ultra-high-quality explanatory writing. This would need to be combined with an easy-to-use feedback system:
- easy to give feedback
- easy for authors to relate the comments to their articles
- make it easy for further communication to happen (see collaborative editing)
We want to be able to build high-quality articles out of simple discussions, Q&A, and the like. Instead of answering the same questions over and over, we want to continually refine and enhance articles, adding caveats, edge cases, and notes that capture more and more of the information that currently has to be gleaned from many sources. It is extremely important to link every discussion about an article to that article, and to make it easy for people to navigate to related articles and discussions.
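One way to make that article-to-discussion link concrete is a simple relational model. The sketch below uses an in-memory SQLite database; the table and column names are illustrative assumptions, not a settled design.

```python
import sqlite3

# Minimal sketch: every discussion carries a foreign key to the
# article it is about, so navigation in both directions is a join.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE articles (
        id    INTEGER PRIMARY KEY,
        title TEXT NOT NULL
    );
    CREATE TABLE discussions (
        id         INTEGER PRIMARY KEY,
        article_id INTEGER REFERENCES articles(id),
        body       TEXT NOT NULL
    );
""")
conn.execute("INSERT INTO articles (id, title) VALUES (1, 'Regex basics')")
conn.execute(
    "INSERT INTO discussions (article_id, body) VALUES (1, 'What about lookahead?')"
)

# Navigate from an article to every discussion attached to it.
rows = conn.execute("""
    SELECT a.title, d.body
    FROM articles a JOIN discussions d ON d.article_id = a.id
""").fetchall()
print(rows)
```

The same join works in reverse (discussion to article), which is what lets refined answers flow back into the article they belong to.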
All data need to be linked and related — not up front, but in an iterative process, as relationships are discovered and reinforced.
A discussion about tags would probably be useful...
Fundamental rule: reduce noise and polling as much as possible, without rejecting too much valuable material.
Entering and manipulating information must be extraordinarily easy. It must scale, from storing a phrase or sentence in a second to parsing and processing a large, intricately described document.
Searching must scale from simple full-text search to SQL-like power. Initially, we will probably go with operators in the style of Google's intitle:
We want to slowly "understand" articles at a more and more granular level. For example, it would be nice to be able to pull the regular-expression sections out of the documentation for the various Unix utilities. It would also be useful to gather the security considerations for every module used in a given web application.
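Even before any deep understanding, a crude structural pass gets part of the way there. The sketch below pulls one named section out of man-page-style text by scanning from its heading to the next all-caps heading; this is a rough heuristic under the assumption that sections are delimited that way, not a real man-page parser.

```python
import re

def extract_section(doc, heading):
    """Return the body of one named section (e.g. REGULAR EXPRESSIONS)
    from man-page-style text: everything between the heading line and
    the next all-caps heading line. A rough heuristic only."""
    pattern = re.compile(
        rf"^{re.escape(heading)}\n(.*?)(?=^[A-Z][A-Z ]+$|\Z)",
        re.MULTILINE | re.DOTALL,
    )
    m = pattern.search(doc)
    return m.group(1).strip() if m else None

manpage = """NAME
grep - print lines matching a pattern

REGULAR EXPRESSIONS
grep understands basic, extended, and Perl-compatible syntax.

EXIT STATUS
0 if a line was selected.
"""
print(extract_section(manpage, "REGULAR EXPRESSIONS"))
```

Run over many utilities, the same extractor would yield the cross-tool collection of regex documentation the paragraph above imagines.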