IOI2002 Questionnaire for Delegations about future IOI competitions

Summary of further comments written on 52 returned questionnaires.
In some cases, spelling/syntax have been edited slightly.
[Remarks in square brackets are mine, Tom Verhoeff]

=======

Re Q. 1 (algorithmic nature)
-- ----

The word "algorithm" may be understood in a more general sense. I
introduced the noun "validating computations" and, correspondingly,
"algorithm of validating computations", which provides not an exact
result, but some consequence of the data at each step.

And what about "algorithms" with an oracle?

Re Q. 3 (elementary solutions)
-- ----

Elementary algorithms? Elementary implementations? Not quite sure what
you mean.

Ill-defined?

Unclear questions [also for Q. 4]

What does the word "elementary" mean? There may be "infinitesimal"
problems in mathematics, but all "solutions" (programs) reduce to 0-1
operations.

What does it mean? (In the context of task BUS and current rules.)

I do not know what kind of tasks can be called non-elementary. It's
probably best that each competition day has one hard problem.

It is debatable where the line between trivial and non-trivial lies,
and, of course, where the line between elementary and non-elementary
lies.

Re Q. 4 (informatics)
-- ----

I do not understand your consistent placing of mathematics opposite to
informatics. There are lots of problems which seem to be mathematical,
but can be nicely solved by computer, so this border is not so clear.

It is debatable where the line between algorithms and math is. Where
do formal languages and automata stand, for example? We think they
should be included.

Re Q.16 (numerical computations)
-- ----

If numerical computations appear in IOI tasks, then the task
descriptions should give all pertinent information about the numerics!
I.e. do NOT assume that local training has covered them. [Do you also
mean to include the relevant theory, or do you mean that no extra
theory should be needed?]

There are "traps", but this will separate bad and good contestants.

The question is hard to understand (and answer) because of the
negations. [I agree, sorry for that.] But we think that numeric
algorithms are not suitable for secondary-school contests. There are
enough things in numerics that even trained professionals in the area
don't get right the first time (see most numerical methods handbooks
for examples). So we think it's too much to ask of students.

Re Q.17-19 (spread of difficulty levels)
-- -------

We prefer the following ranges:

   0-20%   2 tasks
  20-50%   2 tasks
  50-80%   2 tasks

It seems that the difficulty level of the competition is extremely
high, and this fact creates a big gap (similar to the digital gap)
between the advanced countries in the area and the less developed
countries. It also seems to me that the competition slowly but
steadily becomes the "competition of the leaders" and not the
"competition of the students". This level of difficulty discourages
young but talented students from continuing to take an interest in the
competition [and the field of CS]. Perhaps it is about time to
reconsider the direction of the IOI as regards the types of problems
and their difficulty, in relation to the efforts made to attract new
countries (so that they will remain members of the IOI) and to an
objective we should all have: "to give equal opportunities to all
contestants".
Each day one easier task (an average contestant should solve these
two); the point margins for the medals should be approximately:

  240  Bronze
  300  Silver
  400  Gold

Distribution of tasks:

  Day 1:  1 easy, 1 medium, 1 hard
  Day 2:  0 easy, 2 medium, 1 hard

Re Q.20 (transparency of scoring details)
-- ----

A 'semi-transparent' scheme, such as "100 points for optimal, 50 for
up to twice optimal", is generally better than transparent scoring.
[Why?]

Hidden bounds give no additional information to the contestant and are
therefore valueless.

Both.

Transparency in conveying scoring expectations to the contestants is
of the utmost importance, especially in tasks where the overwhelming
majority are expected to get partial credit. I strongly support the
suggestion made by Canada in the GA that some information about the
test data be supplied with the task (e.g. how many "small" values, how
many "medium" values, how many "large" values). This may, of course,
not be appropriate for all problems.

Re Q.21-22 (relative scoring)
-- -------

Only for heuristic algorithms [sic]

Not clear

With transparency

There must be at most one task with relative scoring. If the
organizers don't know the [sic] solution, the task is not suitable.
They have to know (at least) that e.g. the exact/best solution is
NP-hard, and they should also have some ideas (and have implemented
them!) on how to solve the problem somehow.

[For Q.22]: We don't recommend using many such tasks in one
competition.

[For Q.22]: By definition, contestants have no stopping criterion
under relative scoring.

Better question: ... if the organizers have no provably best solution
in advance

We agree with relative scoring as long as the organizers have a
reasonable solution AND the students are not left with a completely
open task.

Tasks
-----

Maybe one day give 10 tasks like: In a file there are 999,999,999
different integers in the range 1 to 1,000,000,000, i.e. one number is
missing. Which one? It is a very small program, about 10 lines in
total, but getting to the solution (algorithm) in a short period of
time is not easy. [A sketch of such a solution appears at the end of
this summary.]

At least 50% of the tasks should have subtasks. [Independent subtasks?
Independently graded?]

The organizers must find tasks for which they have the solution.

General
-------

New ideas on the scientific side (like relative scoring) should be
discussed one year in advance.

Congratulations, and keep up the good work!

The fact that, by submitting [programs that output] the sample cases
from the task descriptions, one can score points, makes a statement
like "only 6 contestants have scored 0 points" very superficial. I
think we should avoid giving points for sample test cases. That would
make grading and scoring much more transparent.
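
Appendix (sketch for the missing-number task)
--------

[A minimal sketch of the missing-number task mentioned under Tasks
above. The file name "numbers.txt" and the one-integer-per-line input
format are assumptions made here for illustration; they come from no
actual task statement. The idea: the missing number equals the sum
1 + 2 + ... + 1,000,000,000 minus the sum of the numbers in the file.
64-bit arithmetic is needed, since the full sum is about 5 * 10^17.]

  /* Sketch in C++: read the integers, print the missing one. */
  #include <cstdio>

  int main() {
      const long long N = 1000000000LL;          // numbers are 1..N
      long long expected = N * (N + 1) / 2;      // sum of 1..N
      long long seen = 0, x;
      FILE *f = std::fopen("numbers.txt", "r");  // assumed file name
      if (!f) return 1;                          // file not found
      while (std::fscanf(f, "%lld", &x) == 1)    // one integer at a time
          seen += x;
      std::fclose(f);
      std::printf("%lld\n", expected - seen);    // the missing number
      return 0;
  }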