

Table of contents

  1. Why this might be a good idea
  2. Open questions—or, why this might not be a good idea
  3. Comparison to existing tools

Why this might be a good idea

This is relatively feasible. (1) This can be kickstarted using Mechanical Turk. A bot could post questions to Mechanical Turk and retrieve answers. This provides a (possibly very bad) lower bound on answer quality, and an upper bound on response times. (2) Unlike many other conceivable proposals related to knowledge representation and reasoning, it doesn’t require magic, i.e., it doesn’t require machine reasoning and knowledge representation beyond what is possible given the current state of the art. Instead, we can incrementally move towards automation as better natural language processing and knowledge representation tools become available.
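To make the Mechanical Turk kickstart concrete, here is a minimal sketch of how a bot might package a dialog question as a HIT. The parameter names follow boto3’s MTurk `create_hit` API; the reward, time limits, and XML schema version are illustrative assumptions, not part of the proposal.

```python
def build_question_hit(question: str, reward_usd: float = 0.25,
                       max_answers: int = 3) -> dict:
    """Package a free-text dialog question as MTurk HIT parameters.

    The QuestionForm schema URL below is the long-standing 2005-10-01
    version; check the current MTurk docs before using this for real.
    """
    question_xml = f"""
    <QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
      <Question>
        <QuestionIdentifier>answer</QuestionIdentifier>
        <QuestionContent><Text>{question}</Text></QuestionContent>
        <AnswerSpecification><FreeTextAnswer/></AnswerSpecification>
      </Question>
    </QuestionForm>"""
    return {
        "Title": "Answer a short question",
        "Description": question[:120],
        "Reward": f"{reward_usd:.2f}",       # MTurk expects a string amount
        "MaxAssignments": max_answers,        # redundancy -> quality lower bound
        "AssignmentDurationInSeconds": 600,
        "LifetimeInSeconds": 3600,            # bounds the response time
        "Question": question_xml,
    }

# Posting and retrieval would then look roughly like this (requires AWS
# credentials, so it is shown as a comment only):
#   import boto3
#   mturk = boto3.client("mturk")
#   hit = mturk.create_hit(**build_question_hit("What is ...?"))
#   answers = mturk.list_assignments_for_hit(HITId=hit["HIT"]["HITId"])
```

The point of the sketch is only that the plumbing is ordinary: no machine reasoning is needed to get a first version of the marketplace running.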

This shares some features with successful companies. (1) It is plausible to me that cognitive resources are currently underused. A number of companies, including Uber and AirBnB, have built their success on making illiquid resources more liquid. Cognitive resources could be such an illiquid resource. (2) There is a clear business model: keep a fraction of each transaction as a marketplace fee. (3) There is an additional business model that stems from being a market participant (automation using bots). Together with (2), this follows a pattern of building infrastructure and then being its primary user, a pattern that Amazon has followed. (4) There is a huge market, ultimately, as illustrated by the myriad of existing tools that are special cases.

This could be substantially better than existing tools. It could be more compositional, with better incentives, more meta (redirecting part of the reward distribution process to the system, and optionally having discussions associated with it), more personalized/context-sensitive, more automated, and with lower activation threshold for asking questions (the initial question can be short and vague, in contrast to the well-formed, complete questions required by existing systems).

In the long run, this has the potential to create good jobs. People could earn money under good conditions—working from home, at arbitrary times, doing tasks that are fun and intellectually stimulating, and personally rewarding because people know that they are helping someone. People could specialize in particular topics (or may already be specialized), and could make money using their specialist knowledge. Others could write bots that use existing answers to suggest answers to new questions. In the beginning, basically all work would be done by humans, but over time, opportunities for automation would be discovered. People could create a passive income stream if they created good answers (or, even more so, bots) that persist over time and that answer questions that many people ask. Companies could be built on top of this system; e.g., a company could offer a simple decision tree builder that uses lightweight natural language processing and that allows people to build domain-specific bots without programming experience.
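The decision-tree bot builder mentioned above can be sketched in a few lines. In this toy version, each node holds a question or a final answer, and the branch taken depends on a keyword match in the user’s reply—a stand-in for the “lightweight natural language processing” a real product would use. All names and the example tree are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str                                      # question to ask, or final answer
    branches: dict = field(default_factory=dict)   # keyword -> child Node

def run_bot(root: Node, replies: list[str]) -> str:
    """Walk the tree, consuming one user reply per internal node."""
    node = root
    for reply in replies:
        if not node.branches:
            break
        # take the first branch whose keyword appears in the reply
        nxt = next((child for kw, child in node.branches.items()
                    if kw in reply.lower()), None)
        if nxt is None:
            return "Sorry, I don't have an answer for that."
        node = nxt
    return node.text

# Example: a two-level bot offering laptop-buying advice
tree = Node("Do you mainly travel or work at a desk?", {
    "travel": Node("Get a lightweight ultrabook."),
    "desk": Node("Do you play games?", {
        "yes": Node("Get a desktop replacement with a good GPU."),
        "no": Node("A mid-range laptop is fine."),
    }),
})
```

A non-programmer could assemble such a tree through a visual editor, and the resulting bot could then participate in the marketplace like any human answerer.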

This project is sufficiently meta. If the basic system works (i.e. if it is useful, even if far from perfect), we could outsource parts of the task of improving the system to the system. For example, we could have a dialog that discusses the rating system. More generally, dogfooding—using the system in the process of building the system—could be useful for two reasons: (1) A problem with many existing institutions is that they are insufficiently meta. Problems don’t bubble up to the top, good ideas go unused, and there is too little reflection on what is going on and what should change. This is at least in part because no good mechanisms exist for aggregating such knowledge. Organization-wide use of a dialog system like this could potentially help avoid this fate. (2) Using the system every day would make it less likely that one builds something that no one wants.

Open questions—or, why this might not be a good idea

How long/complex can individual questions and answers be? We want questions and answers to be short to encourage compositionality. Instead of giving a lot of detail in a single answer, we would like such detail to be distributed into subquestions and their answers. For example, we could restrict answers to a single sentence, or to tweet size, or otherwise encourage short answers. What is a principled solution for this?

Would this be legal? Prediction markets are mostly not legal in the US. We would have to make sure that the precise mechanism chosen complies with the relevant laws. Some mechanisms that wouldn’t work in public could still work within corporations (e.g., private prediction markets).

What about privacy? We want to share a lot of data between market participants for several reasons: (1) We want to share logs of dialogs to support the development of automated tools. (2) We want to re-use answers across dialogs. At the same time, people may not want to ask some questions they care about even in a pseudonymous setting, whereas they might if they were entirely anonymous.

What does a stripped-down version of this proposal look like? Building the entire system at once is probably too difficult. This raises the question of which smaller system would be a good start. What is a minimal v0.1 that would still be useful? On a related note, which components of this proposal can be analyzed independently? For example, one could think separately about organizing knowledge in dialogs, about organizing a discussion forum in an economically sound way, and about using economic incentives for spam/manipulation resistance.

Would response times be short enough? The non-instant nature of the system—i.e. the delays between asking questions and getting answers—could make it less fun to use. This isn’t different from (e.g.) StackOverflow and Quora, but there are different expectations for chat-like conversations and forum posts.

Would money introduce too much friction? We don’t want people thinking about money most of the time. This motivates having a default fee for most things as opposed to requiring a choice of payment amount at every step.

Would people be willing to pay? How expensive would dialogs be? People may be (perhaps unreasonably) unwilling to pay for soft things like knowledge, which can feel difficult to justify. In effect, this puts a lower bound on how reusable content needs to be (so that the price per user stays low), and this lower bound may be quite high.

What exactly should the incentive system look like? How do we validate it? Getting the incentive system right could be very difficult, and very important. At the same time, we wouldn’t have to get it right immediately—it could evolve over time.

What exactly makes dialogs useful, and are we capturing these benefits? There are at least two seemingly separate components that contribute to the usefulness of dialogs. First, the dialog is likely to be more relevant for the original asker than other pieces of text, because it was produced in response to the asker’s questions. Second, there are additional benefits due to the interactive nature of the generative process that produced the dialog: for example, if I hear a position on some issue, I might test it against three counterarguments that I randomly picked. If all are refuted by the person holding the position, I will have higher confidence that the position is solid than if the person holding the position had chosen the counterarguments to evaluate.

Comparison to existing tools

Google, in particular in combination with sites such as StackOverflow and Wikipedia, is great at answering factual questions like “What is the capital of Turkey?” or “How do I write a for-loop in JavaScript?”. Dialog markets aim to do the same for questions that require personalized answers.

There are two main differences between dialog markets and existing tools that provide personalized answers (such as Quora, Yahoo Answers, Reddit, and other online forums):

First, the system is designed to generate conversations that are as valuable as possible for the person asking a question. On existing platforms, answers are written at least as much for other readers as they are for the asker. If I am writing for a large audience, I can’t make my answers depend on very specific circumstances; on the other hand, if I am writing for a single user, I can ask many follow-up questions to make sure that I am really solving the underlying problem that prompted their question. To facilitate this focus on the asker’s values, credit assignment is the core problem that dialog markets need to solve well. In existing systems, credit assignment frequently seems like an afterthought. By default, I also expect dialogs to be semi-private (non-searchable), so that the person asking can more comfortably provide personal information.
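As one illustration of what a credit-assignment rule could look like, here is a deliberately simple sketch: after a dialog, the asker rates each contribution, and the payment (minus a marketplace fee) is split among contributors in proportion to those ratings. Both the fee rate and the proportional rule are assumptions for the sake of the example—the essay leaves the actual mechanism open.

```python
def split_reward(payment: float, ratings: dict[str, float],
                 fee_rate: float = 0.10) -> dict[str, float]:
    """Divide `payment` among contributors in proportion to ratings.

    `ratings` maps contributor names (human or bot) to non-negative
    scores assigned by the asker; `fee_rate` is the marketplace's cut.
    """
    pool = payment * (1 - fee_rate)
    total = sum(ratings.values())
    if total == 0:
        return {who: 0.0 for who in ratings}
    return {who: pool * score / total for who, score in ratings.items()}

payouts = split_reward(10.0, {"alice": 3, "bot_1": 1, "carol": 0})
# alice receives 3/4 of the $9.00 pool, bot_1 receives 1/4, carol nothing
```

A rule this naive is easy to game (e.g., by flooding the dialog with mediocre contributions), which is exactly why the essay treats robust credit assignment as the core open problem rather than a solved one.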

Second, dialog markets are designed to allow for the incremental automation of contributions. All design choices are aligned with this goal: We use monetary rewards, since reputation-based systems (as on Quora and Reddit) are unlikely to incentivize people to build substantial automation on top of the system. We aim for robust credit assignment based on the asker’s values, since anything short of that can easily lead to low-quality contributions by profit-maximizing algorithmic participants. We favor individual contributions that are small, maybe single sentences, since it is much easier to automate such short contributions than to produce entire paragraphs that completely answer the question (as would be necessary in the case of Quora and Yahoo Answers).