Wrap-Up Workshop: Guiding Questions

Below are the instructions and guiding questions we are providing to presenters at our wrap-up meeting. Did we miss something? Feedback is welcome!


Directions for Challengers and Assessors:

When planning your talk for the wrap-up meeting and the discussions that follow, start by giving an overview of what you did, then consider addressing the additional questions below, as appropriate. We look forward to hearing your impressions. Please limit the number of slides to no more than the number of minutes allocated for your talk. The questions below apply to both map and model challenges unless otherwise noted.


A. Overview Questions

  1. Which target(s) did you work on, and why did you choose them?

  2. What method(s)/software did you use, and why did you choose them?

  3. What result did you obtain?  Was it what you expected?


B. Global Questions

  1. Did the challenge datasets/structures adequately cover the relevant issues in single particle reconstruction in cryoEM / model building and refinement in cryoEM?

  2. Are challenges of this type using publicly available datasets generally valuable?

  3. Either way, what changes would make challenges of this type more useful?

  4. In your opinion, do most software packages produce essentially the same results, with the input data being the most important factor, or do different packages yield different results? Did the challenge change your thinking on this in any way?

  5. If you looked at the final results, did anything stand out regarding the differences between the results achieved by different packages/submitters?

  6. Do you think that any of the results indicate that there are some best practices that should be followed in the field?

  7. Considering new developments in computing, especially GPUs, is access to computing resources still an issue in cryoEM?


C. Challenge Specific

  1. Did you have any difficulties accessing or getting started with using the challenge data?

  2. Maps: Did you have any difficulty at the starting-model stage?

  3. Were the instructions for participating in the challenge sufficient/clear?

  4. Was the information requested about your results adequate for the task? Would you have asked for something different?

  5. Did particular targets cause problems? If so, please comment on the issues you faced.

  6. Did you have sufficient access to computing resources, or was access to computing a limitation in participating in this Challenge?

  7. For non-developers: what factors influenced your choice of software for use in the challenge?

  8. What method(s) did you use to monitor the progress of map refinement / model building and refinement?


D. Assessment Specific

  1. Did you have any difficulties accessing or getting started with the challenge results data?

  2. Were the instructions for participating in assessment clear?

  3. What do you conclude about the challengers' results? What are the most divergent areas?

  4. Are there common misconceptions in processing or validating cryoEM maps / models derived from cryoEM maps?

  5. What are recommended standards/metrics for evaluating medium-resolution maps / models derived from cryoEM maps?

  6. What should be the minimum metrics to report for manuscript submission to a journal?


E. Questions for the Wrap Up Discussion on Sunday

  1. Future challenges: What should be the frequency, and the topic, of future Challenges?

  2. In future challenges, should the identity/origin of the datasets be hidden?

  3. Journal Special Issue: Would you prefer to submit a separate paper on your (or your group’s) results or to collaborate on a summary paper with other participants?

EMDataResource Validation Challenges are supported by NIH National Institute of General Medical Sciences

Please send your challenge questions, comments and feedback to
