Below you can find the instructions and set of guiding questions we are providing to presenters at our wrap-up meeting. Did we miss something? Feedback is welcome!
Directions for Challengers and Assessors:
When planning your talk for the wrap-up meeting and the discussions that follow, start by giving an overview of what you did. Then consider addressing the additional questions below, as appropriate. We look forward to hearing your impressions. Please also limit the number of slides to no more than the number of minutes allocated for your talk. The questions below apply to both map and model challenges unless otherwise noted.
A. Overview Questions
Which target(s) did you work on, and what was your rationale for choosing them?
Which method(s)/software did you use, and what was your rationale for choosing them?
What result did you obtain? Was it what you expected?
B. Global Questions
Did the challenge datasets/structures adequately cover the relevant issues in single-particle reconstruction in cryoEM / model building and refinement in cryoEM?
Are challenges of this type, using publicly available datasets, generally valuable?
Either way, what would you change to make them more useful?
In your opinion, do most software packages produce essentially the same results, with the input data being the most important factor, or do different packages yield different results? Did the challenge change your thinking on this in any way?
If you looked at the final results, did anything stand out regarding the differences between results achieved by different packages/submitters?
Do you think that any of the results indicate that there are some best practices that should be followed in the field?
Considering new developments in computing, especially in GPUs, is access to computing resources still an issue in cryoEM?
C. Challenge Specific
Did you have any difficulties accessing or getting started with using the challenge data?
Maps: Did you have any difficulty at the starting-model stage?
Were the instructions for participating in the challenge sufficient/clear?
Was the information requested about your results adequate for the task? Would you have asked for something different?
Did particular targets cause problems? If so, please comment on the issues you faced.
Did you have sufficient access to computing resources, or was access a limitation on your participation in this challenge?
For non-developers: what factors influenced your choice of software for use in the challenge?
What method(s) did you use to monitor the progress of map refinement / model building and refinement?
D. Assessment Specific
Did you have any difficulties accessing or getting started with the challenge results data?
Were the instructions for participating in assessment clear?
What do you conclude about the challengers' results? What are the most divergent areas?
Are there common misconceptions in processing or validating cryoEM maps / models derived from cryoEM maps?
What are recommended standards/metrics for evaluating medium-resolution maps / models derived from cryoEM maps?
What should be the minimum metrics to report for manuscript submission to a journal?
E. Questions for the Wrap-Up Discussion on Sunday
Future challenges: What should be the frequency and topics of future challenges?
In future challenges, should the identity/origin of the datasets be hidden?
Journal Special Issue: Would you prefer to submit a separate paper on your (or your group’s) results or to collaborate on a summary paper with other participants?
EMDataResource Validation Challenges are supported by NIH National Institute of General Medical Sciences
Please send your challenge questions, comments and feedback to challenges@emdataresource.org