Terms of participation

TECHNOLOGICAL BARRIER

Development of a stable, reliably working software package that identifies factual and semantic errors in academic essays, with results matching those of a human specialist working under time constraints.

TASK

The task for the teams is to create a system that, in near real time (no more than 30 seconds per essay), automatically identifies and explains semantic errors of the following types in essay-genre texts of up to 12,000 characters:
Types of detected errors:
  - The topic of the essay is not covered
  - Breaks in logic: conclusions do not follow from arguments
  - Inappropriate comparisons and metaphors
  - Factual errors
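
For illustration only: a minimal sketch of how a detection pipeline might respect the constraints above (essays up to 12,000 characters, 30 seconds each). Every name in it is our assumption; the contest does not prescribe any architecture, and the official markup format will appear in the technical guidelines.

    # Minimal illustrative pipeline sketch; all names are hypothetical.
    import time
    from dataclasses import dataclass

    MAX_CHARS = 12_000     # essay volume limit from the task description
    TIME_BUDGET_S = 30.0   # near-real-time limit per essay

    ERROR_TYPES = (
        "topic_not_covered",
        "broken_logic",
        "inappropriate_comparison_or_metaphor",
        "factual_error",
    )

    @dataclass
    class DetectedError:
        error_type: str
        start: int          # character offset where the flagged span begins
        end: int            # character offset where it ends
        explanation: str    # the required human-readable explanation

    def run_detector(error_type: str, essay: str) -> list:
        # Stub: each team plugs in its own NLP models here.
        return []

    def detect_errors(essay: str) -> list:
        """Run all detectors while respecting the size and time limits."""
        assert len(essay) <= MAX_CHARS, "essay exceeds 12,000 characters"
        deadline = time.monotonic() + TIME_BUDGET_S
        found = []
        for error_type in ERROR_TYPES:
            if time.monotonic() >= deadline:
                break  # stay within the 30-second budget
            found.extend(run_detector(error_type, essay))
        return found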
The technical guidelines for the contest, the evaluation methodology, examples of essays marked up in accordance with the methodology, and the technical specifications of the platform will be published by mid-summer 2020. Register to stay informed.

HOW TO PARTICIPATE

We welcome teams of 2 to 10 people of legal age. Any organization or individual can form a team to participate in the contest. Participants are not restricted in their choice of software or computing power.
  1. Read the Contest Terms and Conditions.
  2. Complete the registration form.
  3. Become a “Participant” of the contest.
  4. Propose a solution to the task of the qualification stage.
  5. Get an invitation to the final stage.
  6. Take part in the final event and become a winner by developing the best AI product that overcomes the contest technological barrier.

EVALUATION SYSTEM

Solution processing
Participants' solutions are loaded into an automatic verification system, where each solution is processed by comparison with the reference (standard) markup.
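
The exact comparison procedure will be published with the technical guidelines. Purely as an illustration, an automatic check against reference markup might match detected errors to reference errors by type and span overlap (all details below are assumptions):

    # Illustrative only: one plausible way an automatic checker could
    # compare a team's detected errors with the reference markup.
    def spans_overlap(a, b):
        """Half-open character spans overlap if neither ends first."""
        return a[0] < b[1] and b[0] < a[1]

    def share_of_reference_found(detected, reference):
        """Fraction of reference errors matched by type and span overlap.
        Each error is a (type, start, end) tuple."""
        matched = 0
        for ref_type, ref_start, ref_end in reference:
            for det_type, det_start, det_end in detected:
                if det_type == ref_type and spans_overlap(
                    (det_start, det_end), (ref_start, ref_end)
                ):
                    matched += 1
                    break
        return matched / len(reference) if reference else 1.0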

CONTEST TIMELINE

Regular tests will be held throughout the competition. Registration will remain open continuously; the first cycle will take place in the second half of 2020.
FIRST CYCLE of the competition (from December 11, 2019):
  - Registration open: not less than 6 months
  - Qualification: not less than 1 month
  - Final: not less than 2 weeks

FAQ

Who can participate?

Any Russian or foreign legal entity or individual is invited to participate in the Up Great contest READ//ABLE.

To take part in the contest, a team should consist of at least 2 and at most 10 members, including the team leader. Team members must be of full legal age, or legally emancipated minors under the legislation of the Russian Federation. If you don’t have a team yet, we will help you find one or form a new one.

Is there a registration fee?
There is no registration fee to participate in the contest.
How does the registration work?

For Russian speaking participants: to register for the contest, please fill in the form https://account.rvc.ru/login?src=relative_education.

For non-Russian speaking participants, please follow the link.

Should you have any difficulties with the form, do not hesitate to contact us at ai@upgreat.one.

Has the competition already started? How long will the qualification stage last?
Registration has been open since December 2019. The technical guidelines for the contest and the dataset are currently under development.
The first tests will tentatively take place in autumn 2020. Exact dates will be announced in late spring, once the technical guidelines and the first part of the dataset are published.
What stages does the testing consist of?

Testing includes the following stages:

1. Technical. Participants connect to the server, download the dataset, detect errors, and upload the results back to the server (a hypothetical sketch of this workflow follows the list).
2. Main. Participants receive new essays that have not been published before and that teachers and specialists have not yet seen. They mark up the errors and upload the results.
3. Verification. A technical stage in which the technical commission and the panel of judges check the teams' results and the essays to objectively determine the errors and the level of the technological barrier. Expert results are automatically compared with the teams' results.
4. Announcement of the results.
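
As an illustration of the technical stage, the download-process-upload loop might look like the sketch below. The real platform API, endpoints, and authentication have not been published, so every URL and call here is hypothetical:

    # Hypothetical sketch of the technical stage; the real API, URLs and
    # authentication will be defined in the technical guidelines.
    import requests

    BASE_URL = "https://platform.example/api"  # placeholder, not the real URL
    TOKEN = "team-api-token"                   # placeholder credential

    def run_technical_stage() -> None:
        headers = {"Authorization": f"Bearer {TOKEN}"}
        # 1. Download the essays assigned to the team.
        essays = requests.get(f"{BASE_URL}/essays", headers=headers).json()
        for essay in essays:
            # 2. Detect and mark errors locally; teams may use any
            #    software and hardware they consider necessary.
            marked_up = detect_and_mark_errors(essay["text"])
            # 3. Upload the marked-up TXT file back to the server.
            requests.post(
                f"{BASE_URL}/essays/{essay['id']}/markup",
                headers=headers,
                data=marked_up.encode("utf-8"),
            )

    def detect_and_mark_errors(text: str) -> str:
        # Stub for a team's own detection system.
        return text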

Are there any restrictions on the amount of data and requirements for the hardware?
There are no restrictions; teams can use any hardware and data that they consider necessary.
What amount of data is given? How will it be evaluated?
The first part of the dataset (essays and compositions) will be published in early summer 2020. Participants can train and test their algorithms on any other data, as only the end result will be taken into consideration.

Solutions will be evaluated by comparison with the average number of errors that a human teacher or specialist can find in the same documents under time constraints.
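
Until the guidelines appear, the exact formula is unknown; a toy reading of "comparison with the average number of errors found by a specialist" could be:

    # Toy illustration, not the official metric: relate the number of
    # correctly detected errors to the average count found by experts.
    def relative_score(machine_correct, human_counts):
        avg_human = sum(human_counts) / len(human_counts)
        return machine_correct / avg_human if avg_human else 0.0

    # Experts found 7, 9 and 8 errors; the system correctly detected 6.
    print(relative_score(6, [7, 9, 8]))  # 0.75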

Technical guidelines with the detailed description of the evaluation framework will become available by July 2020.
In what form is the solution provided?
Participants connect to the platform via an API and download TXT files; detected errors are marked using a simple wiki- or Markdown-like markup language.

The edited TXT file is then uploaded back via the API and evaluated on the platform using a software solution provided by the organizers.
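
The markup syntax itself has not been published yet. Purely as an invented illustration, flagging a factual error inline might look like this:

    The water in the kettle boiled at {{error type="factual"
    explanation="At standard atmospheric pressure, water boils at
    100 °C, not 90 °C."}}90 degrees{{/error}}.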

The technical guidelines will describe the procedure in detail.
Any other questions?
Let’s get in touch! You can contact us at ai@upgreat.one
