The Quest for the Right Question
Science advisors face the challenge of providing good answers to questions posed by decision-makers. But what about the challenge of asking good questions? Is there a role for science advisors in formulating what decision-makers ought to ask?
Formulating questions carries far-reaching implications. A simple question can entail a huge amount of labour and may engender both hopes and fears. Think of a short and clear question such as “what is the current knowledge on endocrine disruptors?”
Science advisors will face problems of interpretation. What falls within “knowledge”? Is the question focused on the natural sciences, or is there an expectation to include knowledge of current regulations, or of public controversies? What is the geographic scope? Why is this question being asked? Who will use the answer, how well can they handle jargon or multidisciplinary answers, and how likely are they to politicize or “weaponize” the answer (or even a single poorly chosen word)? Could a well-phrased answer lead to new research money, or could it instead be used to justify a reduction in research funding? In short, conflicts of interest may arise in some cases.
Advisors may hesitate to ask for clarification on what appears to be a clear question. We want to look neither incompetent nor uncooperative, and we want to remain on tap, not get on top.

Professor Marc Saner was the Departmental Science Advisor in the Canadian federal Ministry of Natural Resources from 2022 to 2025, having been seconded from his position as Chair of the Department of Geography, Environment and Geomatics at the University of Ottawa. With a PhD in Ecology and an MA in Philosophy, Marc’s career has spanned not only government and academic science, but also the intersection of the humanities, social and natural sciences. He has brought interdisciplinary perspectives to his work in ethics, risk management, and governance of new technologies in both the public and private sectors. An experienced academic and public sector leader, Marc established a university Institute for Science, Society and Policy, and led nationally recognised programs in evidence synthesis and knowledge translation.
Institute for Science, Society and Policy
University of Ottawa, Canada
Above all, we don’t want to raise suspicions of conflicts of interest, political or professional. Maintaining a firewall between question and answer teams feels desirable, perhaps even necessary.
Reinforcing this hesitancy is the awareness that getting challenged on a question can be unpleasant. We all like freedom and asking a broad question provides such freedom; let’s see what they come up with! As a professor, I have often observed that PhD students are reluctant to move from a general theme to an answerable question. I now insist on seeing question marks in sentences that are supposed to be research questions—and remarkably, these question marks are often absent, because students understandably want to keep options open.
Despite this natural hesitation to insist on greater clarity, it is very likely that a single exchange between the question team (usually the policy side, or a client) and the answer team (usually the science side, or a consultant) will clarify the scope and the assignment to the benefit of all, not least those footing the bill (often the public).
In addition, such an interaction would give the answer team a chance to provide the question team with an upfront opinion on how answerable the question appears to be. Answerability is key during “the quest for the right question.” The interaction can thus lead to a negotiation over time and resources, clarify costs and benefits, and improve efficiency and effectiveness.

Is there a “best practice” for protecting the impartiality of science advisors in this context? Let me share experiences from my time at a think-tank (the Institute on Governance), a government-funded NGO (the Council of Canadian Academies), and an academic research unit (the Institute for Science, Society and Policy). Common to all three organizational models was the challenge of identifying the exact needs of those who commissioned us.
This is what I distilled from my experiences:
1) One needs at least one iteration on the question, even if that question initially appears to be clear. This must include discussion of scope and answerability, time and resources (which are all tied together, of course). The approach will change depending on the size of the teams:
- At the level of small teams, say consultants, the goal is to build trust, ensure quality, and be mindful of your own workload. A meeting is well worth the time spent because you can pick up intent, tone, jargon, disagreements among the clientele, and more.
- At the level of broad organizations such as the Council of Canadian Academies (or the more established US National Academies), the process is often highly scripted. Their published processes provide great models for study.
2) It is helpful to have at least one iteration on the prospective, or final, answer. Again, scale matters:
- At the micro-level of a consultant, it is good practice to start answering the question and then, about a third of the way into the project, show a draft introduction (which tests mutual comprehension) and a table of contents (which illustrates structure and level of detail). Sometimes the question gets tweaked right then and there.
- At the macro-level of expert panels, there will be a formal report review. It can happen that a panel answers the question they thought should have been asked, instead of the assigned question. Back to the question!
3) The relationships emerging from these encounters are normally not detrimental to independence and impartiality. In this context, we cannot realistically aspire to firewalls such as the sequestration of a jury in a trial or of the Cardinals in the Sistine Chapel (smoke signals!).
- While the risk of manipulation is real, the efficiency gains matter more. We face some big questions in the world, and we have to work together as best we can.
- Keep in mind that the process does not end when an answer is born. There will be subsequent public re-evaluation, transmogrification, or dismissal. If there is understanding and trust between question and answer teams, there is a better chance that the answer will be fully understood, accepted, used, and defended.
4) To protect impartiality and clarity of process, one aspect of roles and accountabilities must be crystal-clear: who holds the final pen on the question, and who holds the final pen on the answer? This simple division of powers provides an alternative to the temporal or spatial sequestration of question and answer teams.
What is the role of the question team? First, hold reasonable expectations of the advisors, because science is not as precise as it may seem. Second, provide as much transparency as possible about the true nature of the request: what is this really about, and who, exactly, is the final audience? The question team is also well placed to design moderated processes that allow free interactions without casting doubt on the impartiality of the advisors.
What is the role of the answer team? First, develop the tact, tenacity, and courage to keep asking until the assignment is really clear. Second, provide full transparency about the true nature of the answers: the choices made, the strengths, and the limitations.
Let me end with a note on AI. Chatbots engage us in Q&A processes that pose entirely novel challenges. AI has been aptly labelled “Alien Intelligence” by Yuval Harari: a very different team-member has arrived! With near-instant answers, literally in our pockets, there is neither temporal nor spatial sequestration. Furthermore, chatbots manipulate us with their knowledge about us and through their propensity for echo and flattery.
The resulting mind-meld between “question team” and “answer team” becomes the intoxicating attraction. We have no idea about accountabilities. It is high time to start designing new processes that successfully embed these alien experts into our Q&A practices.
Note from the Author: “With thanks to Stephen C. Stearns (now Prof Emeritus, Yale University) from whom I learned in the early 1980s to pay real attention to questions.”
Header image: Created by the author using AI
READINGS
On the power of questions: Aman, R. (2025). Whoever Asks the Questions Leads the Conversation. https://medium.com/@rickaman/whoever-asks-the-questions-leads-the-conversation-0b94af51bdd3. (Note: this title translates the common German saying “wer fragt, der führt.”)
On boundary organizations: Guston, D. (2001). Boundary Organizations in Environmental Policy and Science: An Introduction. Science, Technology, & Human Values 26 (4): 399-408. https://journals.sagepub.com/doi/10.1177/016224390102600401
On temporal and spatial boundaries: Saner, M. (2016). Temporal and spatial dimensions in the management of scientific advice to governments. Palgrave Communications 2: 16059. https://doi.org/10.1057/palcomms.2016.59

