A technological challenge is a form of organisation of R&D activities whereby several R&D teams address a given technological objective using a common testing environment set up for that purpose.
Technological challenges are particularly suited to studying complex systems, especially those involving artificial intelligence and machine learning, as they enable the objective and comparable measurement of the performance of such systems.
In a technological challenge, the organiser defines common experimental protocols in cooperation with the participating R&D teams before providing the environment enabling the teams to test their systems according to these protocols.
Rewards and Benefits
- Unique opportunity for innovators in a given domain to progress faster:
  - Gain privileged access to resources tailored to the objective
  - Perform unbiased experimental measurements
  - Gain first-hand information about the state of the art
- Networking among the best teams in the domain
- Trustworthy results that can help the teams obtain further support and funding
How does it work?
Ongoing technological challenges:
The first EDF technological challenge addresses the topic of improvised explosive device (IED) and landmine detection. It was launched under the EDF 2022 calls for proposals and started on 1 December 2023. The organising consortium is supported by the EDF under the project HiTDOC. Four teams participate in the challenge, each supported under an EDF project (AIDEDex, CONVOY, DeterMine, TICHE). Dry-run tests are being prepared.
The second EDF technological challenge addresses the topic of Human Language Technologies (HLT). It was launched under the EDF 2023 calls for proposals, and the projects were selected in May 2024. The organising consortium will be supported under the project ARCHER. Three participating teams will be supported under EDF projects (AtLaS, LINGUARISE-DC, NEMO). The challenge will start by the end of 2024.
Upcoming technological challenges:
The upcoming EDF technological challenges will address robust autonomous drone navigation and multi-source satellite image analysis. They have been launched under the EDF 2024 calls for proposals. The calls for organising and participating in these challenges are currently open:
| | Robust autonomous drone navigation | Multi-source satellite image analysis |
| --- | --- | --- |
| For Organisers | Access the call topic | Access the call topic |
| For Potential Participants | Access the call topic | Access the call topic |
The EDF eligibility rules apply to entities and consortia taking part in technological challenges organised under the EDF.
In particular, the organising consortium of an EDF technological challenge should include at least three eligible legal entities established in at least three different Member States or associated countries (Norway).
The same rule applies to each of the participating consortia. The complete eligibility criteria are described in Chapter 6 of the call document.
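As a purely illustrative sketch (not an official tool), the minimum-composition part of this rule can be expressed as a simple check over hypothetical entity records; the authoritative criteria remain those in the call document:

```python
# Illustrative check of the minimum-composition rule only (hypothetical data model);
# per-entity eligibility and the complete criteria are defined in the call document.
from typing import List, Tuple

# Each entity is represented here as (name, country of establishment).
Entity = Tuple[str, str]


def meets_minimum_composition(consortium: List[Entity]) -> bool:
    """At least three legal entities established in at least three different
    Member States or associated countries."""
    countries = {country for _, country in consortium}
    return len(consortium) >= 3 and len(countries) >= 3


if __name__ == "__main__":
    # Hypothetical example: three entities from three different countries.
    consortium = [("Entity A", "FR"), ("Entity B", "DE"), ("Entity C", "NO")]
    print(meets_minimum_composition(consortium))  # True
```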
Organising technological challenges is necessary to measure the performance of systems involving artificial intelligence and machine learning in an objective, comparable and transparent way.
Indeed, since these systems learn, the test data should not be provided to system developers beforehand, to avoid any bias in the measurements. However, to foster progress, the test data should be made available to them after completion of the test phase, so that they can analyse and compare the results.
It is therefore necessary to organise test campaigns whereby common test data is disclosed simultaneously to all participants. This need for a specific organisation to evaluate systems with learning capabilities is analogous to the need to organise exams to evaluate students.
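For illustration only, the following sketch (hypothetical, not an EDF evaluation protocol) shows the principle: the organiser withholds the ground-truth labels, every participant submits predictions on the same test data, and an identical metric is applied to all submissions before the labels are released.

```python
# Minimal illustrative sketch of a blind test campaign (hypothetical, not an EDF protocol).
# The organiser keeps the ground-truth labels; participants only ever submit predictions.
from typing import Dict, List


def score(predictions: List[int], ground_truth: List[int]) -> float:
    """Accuracy: the same metric is applied identically to every participant."""
    assert len(predictions) == len(ground_truth)
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    return correct / len(ground_truth)


def run_campaign(submissions: Dict[str, List[int]], ground_truth: List[int]) -> Dict[str, float]:
    """Score all submissions against the withheld labels; the labels can be released afterwards."""
    return {team: score(preds, ground_truth) for team, preds in submissions.items()}


if __name__ == "__main__":
    # Hypothetical submissions from three participating teams on the same undisclosed test set.
    ground_truth = [1, 0, 1, 1, 0, 1, 0, 0]
    submissions = {
        "team_a": [1, 0, 1, 0, 0, 1, 0, 0],
        "team_b": [1, 1, 1, 1, 0, 1, 0, 1],
        "team_c": [1, 0, 1, 1, 0, 0, 0, 0],
    }
    print(run_campaign(submissions, ground_truth))
```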
Beyond the field of artificial intelligence, technological challenges are also needed when a testing environment of adequate size for a given technological objective can only be set up and operated temporarily.
Another interesting feature of technological challenges is that they offer a unique blend of mission-driven R&D steering and openness to innovative approaches. They also contribute to community building and to the visibility of a domain.
Technological challenges have been used for several decades in various R&D programmes. They require careful planning and tight coordination among stakeholders, but they have proved instrumental in accelerating R&D progress.
A technological challenge is generally composed of a series of test campaigns organised over several years.
Except for high technological readiness levels, for which shorter durations are relevant, durations of 4 to 5 years are customary.
Under the EDF, technological challenges are launched through dedicated calls for proposals that include two topics per challenge: one for organising the challenge and one for participating in it.
Calls for technological challenges also include preliminary evaluation plans enabling applicants to prepare projects that can cooperate smoothly with one another. During the challenge implementation, the organising team develops a detailed evaluation plan for each test campaign, in close cooperation with the participating teams.
The topic selection for technological challenges follows the same procedure as for other topics in the EDF.
Technological challenges are an organisational modality chosen for topics where this approach is appropriate.
Organisers and participants rely on different competencies and generally belong to different entities.
However, in specific cases (e.g. different divisions or departments in large organisations), an entity can be both a member of the organising consortium and a member of a participating consortium, provided that this does not lead to any bias in the testing protocols and measurements.
In particular, participants should not have any privileged access to test data.
It is in the interest of participants to make the most of the foreseen testing environment and therefore, in principle, to take part in all tasks of each evaluation campaign in a technological challenge. In addition, the participation of several teams in a given task is needed to ensure comparability across solutions and contributes to the value of the evaluation results.
Maximising participation in each task is therefore a goal when elaborating the detailed evaluation plans during the challenge implementation. In particular cases, some participants may not take part in some tasks, if duly justified.
While some tasks might not be covered every year for practical reasons, repeating the evaluation process under the same conditions over the years is needed to measure progress.
Applicants involved in different proposals are free to communicate with one another to facilitate smooth cooperation later on during the challenge implementation, provided that this does not lead to any distortion of the conditions of competition or to any bias in the testing protocols.