Development of a stable software package that identifies factual and semantic errors in academic essays, matching the results a human specialist produces within a limited period of time.
HOW TO PARTICIPATE
- Read the Contest Terms and Conditions.
- Complete the registration form.
- Become a Participant in the contest.
- Propose a solution to the task of the qualification stage.
- Get an invitation to the final stage.
- Take part in the final event and become a winner by developing the best AI product that overcomes the contest’s technological barrier.
Contest duration: not less than 6 months.
Any Russian or foreign legal entity or individual is invited to participate in the Up Great contest READ//ABLE.
To take part in the contest, a team must consist of at least 2 and at most 10 members, including the team leader. The team may include only persons of full legal age, or persons granted equivalent status through the emancipation-of-minors procedure under the legislation of the Russian Federation. If you don’t have a team yet, we will help you find one or form a new team.
For Russian-speaking participants: to register for the contest, please fill in the form at https://account.rvc.ru/login?src=relative_education.
For non-Russian-speaking participants: please follow the link
Should you have any difficulties with the form, do not hesitate to contact us at email@example.com.
The first tests will tentatively take place in autumn 2020. Exact dates will be announced in late spring 2020, once the technical guidelines and the first part of the dataset are published.
Testing includes the following stages:
1. Technical. Participants connect to the server, download the dataset, detect errors in the essays, and upload the results back to the server (see the sketch after this list).
2. Main. Participants receive new essays that have never been published and that teachers and specialists have not yet seen. They mark up the errors and upload the results.
3. Verification. A technical stage in which the technical commission and the panel of judges check the teams’ results and the essays to objectively determine the errors and the level of the technological barrier. Expert results are automatically compared with the teams’ results.
4. Announcement of the results
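As an illustration only (the real endpoints, authentication scheme, and data format will be specified in the technical guidelines), the technical-stage round trip might look roughly like the following Python sketch. The URL, the token handling, and the detect_errors helper are all hypothetical:

```python
import requests

API_BASE = "https://example.org/contest/api"  # hypothetical endpoint; the real URL comes from the guidelines
TOKEN = "YOUR_TEAM_TOKEN"                     # hypothetical authentication scheme

def run_technical_stage():
    # Download the dataset of essays (hypothetical route and response schema).
    resp = requests.get(f"{API_BASE}/dataset",
                        headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()
    essays = resp.json()  # assume a list of {"id": ..., "text": ...} records

    # Run the team's own error detector over each essay.
    results = [{"id": e["id"], "errors": detect_errors(e["text"])} for e in essays]

    # Upload the detected errors back to the server (hypothetical route).
    upload = requests.post(f"{API_BASE}/results",
                           headers={"Authorization": f"Bearer {TOKEN}"},
                           json=results)
    upload.raise_for_status()

def detect_errors(text: str) -> list:
    # Placeholder for the team's own factual/semantic error detection model.
    raise NotImplementedError
```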
Solutions will be evaluated by comparing their output with the average number of errors that a real teacher or specialist can find in the same documents under time constraints.
Technical guidelines with a detailed description of the evaluation framework will become available in late spring 2020.
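The precise metric is therefore left to the guidelines; purely as an assumed illustration, matching a team’s marked error spans against expert markup and relating the hit count to a teacher baseline could be sketched as follows (the span-overlap criterion and the baseline value are assumptions, not the official procedure):

```python
def overlap(a, b):
    # Two marked errors "match" if their character spans overlap (an assumed criterion).
    return a["start"] < b["end"] and b["start"] < a["end"]

def compare_to_experts(team_errors, expert_errors, teacher_baseline):
    """Count how many expert-marked errors the team also found,
    then relate that count to the average found by a human teacher."""
    matched = sum(any(overlap(t, e) for t in team_errors) for e in expert_errors)
    return matched / teacher_baseline  # >= 1.0 would mean matching a specialist

# Example with hypothetical character-span annotations:
team = [{"start": 10, "end": 25}, {"start": 40, "end": 52}]
experts = [{"start": 12, "end": 24}, {"start": 100, "end": 110}]
print(compare_to_experts(team, experts, teacher_baseline=2))  # 0.5
```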
The edited txt file is then uploaded back via the API and evaluated on the platform using the software solution provided by the organizers.
The technical guidelines will describe the procedure in detail.
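Purely as a hypothetical sketch of such an upload (the actual route, field names, and authentication are defined by the organizers’ API):

```python
import requests

API_BASE = "https://example.org/contest/api"  # hypothetical; see the technical guidelines

def upload_marked_essay(essay_id: str, path: str, token: str):
    # Send the edited txt file back for evaluation on the platform.
    with open(path, "rb") as f:
        resp = requests.post(
            f"{API_BASE}/essays/{essay_id}/markup",   # hypothetical route
            headers={"Authorization": f"Bearer {token}"},
            files={"file": (path, f, "text/plain")},
        )
    resp.raise_for_status()
    return resp.json()
```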