Terms of participation


Development of a stable, reliably working software package that identifies factual and semantic errors in academic essays, with results matching those of a human specialist working within a limited period of time.


We welcome teams of 2 to 10 people of legal age. Any organization or individual can form a team to participate in the contest. Participants are not limited in choosing the software used and computing power.
  1. Complete the registration form.
  2. Read the Contest Terms and Conditions, Technical guidelines and other documents here.
  3. Form a team and join our channel in Slack #proj_upgreat_readable in Open Data Science.
  4. Get access to the contest IT-platform.
  5. Pass the qualification stage.
  6. Take part in the main tests of the current cycle.


The contest runs until December 2022 and is divided into cycles. Each cycle consists of registration, qualification, and final stages.
If the technological barrier is not overcome in the current cycle, the next one is launched.
The first cycle took place in November 2020. As no team solved the task, the contest continues with a second cycle, to be launched in autumn 2021.
Registration is available anytime.
11.12.2019 – 29.10.2020
Registration for the 1st cycle
01.10 – 02.11.2020
Qualification 1st cycle
Tests for texts in Russian
Tests for texts in English
Mid-December 2020
1st cycle results announced
AI specialist Jürgen Schmidhuber on inductive inference, universal Solomonoff prior and measuring probability of different events in our final lecture: “Can a computer predict the future?”
AI specialist Jürgen Schmidhuber on Kurt Gödel, meta learning and fundamental limitations of computability in our fourth lecture: “Can a program rewrite its own code?”
Results of the first cycle of the contest announced.
AI specialist Jürgen Schmidhuber on backpropagation, vanishing and exploding gradient problems in our third lecture: “Long Short-Term Memory”
Second lecture of our course on deep learning with Jürgen Schmidhuber, Scientific Director, Swiss AI Lab: “Deep Feedforward Neural Networks”
1st cycle tests for texts in English took place. 8 teams participated. Official results to be announced in mid December.
We launched a course on deep learning with Jürgen Schmidhuber, Scientific Director, Swiss AI Lab IDSIA. Lecture one: “How does pattern recognition work?”
1st cycle tests for texts in Russian took place. 9 teams participated. Official results to be announced in mid December.


The task for the teams is to create a system that automatically identifies and explains semantic errors in near real time (no more than 30 seconds per essay) in essay-genre texts of up to 12,000 characters. The errors fall into the following types:
Types of detected errors:
The topic of the essay is not covered
Breaks in logic: conclusions do not follow from arguments
Inappropriate comparisons and metaphors
Factual errors
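As a minimal sketch of the stated constraints (the 12,000-character and 30-second limits come from the task description above; the function itself is purely illustrative and not part of the contest platform):

```python
# Assumed limits taken from the task description.
MAX_CHARS = 12_000     # maximum essay length, characters
MAX_SECONDS = 30.0     # maximum processing time per essay, seconds

def within_limits(essay: str, processing_seconds: float) -> bool:
    """Check that an essay and its processing time fit the contest limits."""
    return len(essay) <= MAX_CHARS and processing_seconds <= MAX_SECONDS

print(within_limits("A short essay.", 2.5))   # → True
print(within_limits("x" * 13_000, 2.5))       # → False
```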
More information about the task can be found in the Technical Regulations
Technology Barrier

The correspondence of intelligent systems to the human level in understanding cause-and-effect relationships, analyzing facts, and assessing argumentation in texts written by a person.


Processing of solutions
Essays marked up by the participants' AI algorithms are loaded into an automated platform that compares the solutions of different experts and of the AI. The quality of a participant's solution is determined by the degree to which its markup matches the experts' markup.
Two teams cannot occupy the same line in the ranking table: in the event of a tie, the higher position goes to the team whose relative accuracy of algorithmic markup is greater at the next decimal place.
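The official scoring code is not public; as a rough illustration only, agreement between a team's markup and expert markup could be measured with a span-level F1 score, where each marked error is a (start, end, type) triple. The triple representation and the metric are assumptions for this sketch, not the contest's official format:

```python
def span_f1(predicted, reference):
    """F1 score treating each (start, end, error_type) triple as one item."""
    pred, ref = set(predicted), set(reference)
    if not pred or not ref:
        return 0.0
    tp = len(pred & ref)                 # exact-match true positives
    precision = tp / len(pred)
    recall = tp / len(ref)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Expert marked two errors; the AI found one of them plus a false positive.
expert = [(120, 180, "factual"), (400, 455, "logic")]
ai     = [(120, 180, "factual"), (600, 640, "metaphor")]
print(round(span_f1(ai, expert), 2))  # → 0.5
```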


Who can participate?

Any Russian or foreign legal entity and individual is invited to participate in the Up Great contest READ//ABLE.

To take part in the contest the team should consist of at least 2 and maximum 10 members, including the team leader. The team may consist only of citizens of full legal age or equivalent as provided by the emancipation of minors procedure according to the legislation of the Russian Federation. If you don’t have a team yet — we will help you find one or form a new team.

Is there a registration fee?
There is no registration fee to participate in the contest.
How does the registration work?

For Russian speaking participants: to register for the contest, please fill in the form

For non-Russian speaking participants, please follow the link

Should you have any difficulties with the form, do not hesitate to contact us at

Has the competition already started? How long will the qualification stage last?

The contest was launched in December 2019. It is divided into several test cycles. Each cycle consists of registration, qualification (granting access to the tests), and the tests themselves. The first tests took place in November 2020.

Results of the first cycle will be announced in mid December 2020. After that we will publish information on the next cycle.

Registration is open anytime.

What stages does the testing consist of?

Testing includes the following stages:

1. Technical. Participants connect to the server, download the dataset, detect errors, and upload the marked-up files back to the server.
2. Main. Participants receive new essays that have not been published before and that teachers and specialists have not yet seen. They mark up the errors and upload the results.
3. Verification. A technical stage in which the technical commission and the panel of judges check the teams' results and the essays to objectively determine the errors and the level of the technological barrier. Expert results are automatically compared with the teams' results.
4. Announcement of the results.

Are there any restrictions on the amount of data and requirements for the hardware?
There are no restrictions; teams can use any hardware and data that they consider necessary.
What amount of data is given? How will it be evaluated?

Sample text files in Russian and English are already published. Participants can train and test their algorithms using any other data as only the end result will be taken into consideration.

Solutions will be evaluated by comparison with the average number of errors that a real teacher or specialist can find in the same documents in a limited time.

Technical guidelines with the detailed description of the evaluation framework are available.

In what form is the solution provided?
Participants connect to the platform via an API and download txt files; detected errors are marked using a simple wiki- or markdown-like markup language.

Then the edited txt file is uploaded back via the API and evaluated on the platform using the software solution provided by the organizers.
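As an illustration only, the markup step might look like the sketch below. The `{{type|...}}` tag syntax and the `mark_errors` helper are hypothetical; the actual markup language is defined in the technical guidelines:

```python
def mark_errors(text, errors):
    """Wrap each (start, end, error_type) span in an inline tag.

    Spans are applied right-to-left so earlier character offsets
    remain valid after each insertion.
    """
    for start, end, error_type in sorted(errors, reverse=True):
        text = (text[:start]
                + "{{" + error_type + "|" + text[start:end] + "}}"
                + text[end:])
    return text

essay = "The Moon is larger than the Earth, so tides are weak."
errors = [(0, 33, "factual")]   # character span of the false claim
print(mark_errors(essay, errors))
# → {{factual|The Moon is larger than the Earth}}, so tides are weak.
```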

The technical guidelines will describe the procedure in detail.
Any other questions?
Let’s get in touch! You can contact us at