
Frequently Asked Questions
Are there limits on re-submissions if my proposal is rejected?
Yes. The EIC increased allowable rejections from two to three before triggering any extended freeze rules. However, freezing periods and exact resubmission rules have changed historically, so verify current resubmission and freeze policies in the latest call documents before reapplying.
How has funding available under the EIC Accelerator changed recently?
In 2025, the maximum equity funding per project was reduced from €15 million to €10 million, while the separate EIC STEP Scale-Up programme was introduced, offering up to €30 million in equity. Check the current call text to confirm the available funding instruments (grant vs. equity) and maximum amounts for your proposal.
Should I apply now or wait for potential 2026 changes to the EIC process?
Apply now if you can meet the August 4, 2025 Step 1 deadline: waiting risks both having to redo work if templates change and missing the final 2025 opportunity. If your proposal still needs substantial work, weigh the risk of rework against the benefit of applying this year, but prioritize submitting by the announced deadlines when feasible.
What is the final Step 1 submission deadline for the EIC Accelerator in 2025?
You must submit your Step 1 short proposal by Monday, August 4, 2025, to be batched for evaluation on Tuesday, August 5, 2025. Missing this date means you won't be eligible for the October 1, 2025 Step 2 cut-off and would need to wait for 2026 calls.
Are there page limits or strict templates I should follow for Step 2?
Yes. Step 2 templates have included page restrictions in recent years, so adhere strictly to the latest template, page-count, and formatting rules; non-compliant applications risk desk rejection or being disadvantaged during evaluation.
What submission platforms are currently used for Step 1 and Step 2?
The EIC has shifted between platforms: after platform disputes in 2023 they moved back to PDF uploads for Step 2 and have used separate solutions for Step 1 at times. Always check the current call guidance for the exact submission method and file-format requirements before preparing files.
Will AI-generated proposals affect the evaluation process?
There are reports of many AI-generated proposals entering the system in 2025, which could strain evaluators and the review process. To stand out, ensure your proposal clearly demonstrates original technical depth, credible data, and strong human-driven validation rather than relying on generic AI-generated content.
What happens if I miss the 2025 deadlines — when can I apply next?
If you miss the 2025 cut-offs you'll need to submit for Step 2 in 2026, but exact 2026 deadlines aren’t published yet; they will be announced in the 2026 Work Programme in late 2025. Plan to monitor EIC communications in late 2025 and prepare your materials so you can hit any newly announced dates.
Could the EIC templates or process change before the next call?
Yes. The EIC has historically changed templates and processes frequently, so expect possible template updates or procedural changes for 2026. Treat your current application work as potentially reusable but be ready to revise it if new templates or rules are released.
If my Step 1 is successful, when is the Step 2 full proposal due?
Successful Step 1 applicants will be eligible to submit a full Step 2 proposal for the October 1, 2025 cut-off. Make sure to start drafting the full application early because Step 2 templates and page limits can be strict and time-consuming to complete.
What should the EIC do to handle the AI-driven workload problem?
The post suggests two viable options: cap the number and/or size of applications to reduce reviewer load, or provide evaluators with AI-assisted tools for triage, summarization, and consistency checks. Either action requires rapid implementation to prevent reviewer fatigue and maintain evaluation quality as application volume grows.
Should I use AI to write my EIC proposal?
Yes, but selectively. Use AI to draft, iterate, and streamline wording, but don't rely on it exclusively. You must supply accurate, EIC-specific inputs and review all outputs carefully to ensure they align with evaluation criteria and convey the strategic, technical, and market details a human reviewer needs.
Is it risky to have a human organizer oversee AI-generated proposals?
No — it’s advisable. A knowledgeable human organizer ensures proposals use correct inputs, stay aligned with EIC priorities, and maintain narrative coherence. They can validate AI outputs, add strategic nuance, and tailor the submission to reviewer expectations, which AI alone cannot reliably provide.
Why is the increase in proposals a problem for evaluators?
More proposals mean each evaluator carries a heavier workload and has less time for any single submission. That leads to shallower reads, less attention to nuance, and a higher risk that important details are missed. Ultimately, the quality of feedback and the chance to convey subtle value propositions both decrease when reviewers are overloaded.
What practical steps can I take right now to improve my chances under these changing conditions?
Combine AI efficiency with deep EIC-specific preparation: learn the evaluation criteria, craft precise prompts, and have an expert review and edit all AI drafts. Create a crisp executive summary and use explicit links to scoring criteria throughout the proposal. Finally, prioritize clarity and evidence over flowery language so busy reviewers can quickly grasp your project's value.
How can I make AI-written text easier for evaluators to read?
Structure the proposal clearly around the EIC criteria with headings, bullet points, and explicit links to impact, TRL, and commercialization plans. Keep sentences concise, avoid generic buzzwords, and highlight evidence, numbers, and milestones early. Include a short reviewer-friendly executive summary that quickly answers 'what', 'why', 'how', and 'impact'.
How has AI changed EIC Accelerator proposal evaluations?
AI has dramatically increased the volume and uniformity of proposals applicants can produce, while evaluators still must read and judge them manually. This creates an asymmetry: applicants can generate many polished drafts, but reviewers have limited time to scrutinize each one. The result is less time per proposal, reduced nuance in assessments, and potential frustration when AI-written content doesn't map clearly to evaluation criteria.
What does 'learn about the EIC' mean in practical terms?
Learn the EIC evaluation criteria, scoring rubrics, common weaknesses in funded projects, and the program's priorities such as impact and scalability. Read successful abstracts and rejection feedback if available, and tailor your narrative to the reviewers' expectations. This knowledge lets you craft prompts and edits so AI output fits what evaluators actually look for.
If evaluators can't use AI, does that hurt applicants?
Yes — it can. Applicants can use AI to produce more proposals and more polished language, but manual reviewers will have to parse those outputs without tools to help prioritize or summarize. This mismatch can make AI-generated text harder to evaluate if it isn't tightly aligned to the EIC scoring criteria, meaning applicants may be disadvantaged if their AI content isn't well-structured for human reading.
How soon does the EIC need to act on these evaluation challenges?
The blog warns that continuing the current process may become unsustainable by 2026, so action should be taken promptly. Changes like application limits or providing AI tools for evaluators can take time to design and roll out, so early planning and pilot programs are advisable. Stakeholders should push for rapid but carefully implemented adjustments.
What practical steps should I take if I want to target a 2025 Challenge quickly?
Prioritize completing and polishing your Step 1 submission before the 2025 deadline, ensure your technical and impact sections clearly map to the Challenge priorities, and prepare supporting evidence (TRLs, market data). If timing is tight, focus on the strongest elements that demonstrate fit and readiness.
What are 'Challenges' in the EIC Accelerator Work Programme?
Challenges are specific themed funding buckets within a Work Programme that target strategic technology or sector areas. They define priority topics applicants can apply under and may offer targeted calls or expectations tied to that theme.
Which Challenges were listed for the 2025 EIC Accelerator?
The 2025 Challenges included: acceleration of advanced materials development and upscaling; biotechnology-driven low-emission food and feed production systems; GenAI4EU for generative AI champions; innovative in-space servicing, operations, and space-based robotics; and breakthrough innovations for future mobility.
What should I do if I’m getting anxious about waiting for Step 1 results?
First, confirm your submission is marked as received in the portal and check your spam folder for any emails. If the usual 4–6 week period has passed, reach out to the EIC helpdesk with your submission ID; otherwise use the waiting time to prepare Step 2 materials in case you pass.
How long does Step 1 evaluation for the EIC Accelerator usually take?
Step 1 evaluations typically take 4–6 weeks, so you should expect a decision roughly 28 to 42 days after the submission deadline, though small variations can occur depending on the call.
Can I resubmit the same proposal under a different Challenge next year if it changes?
Yes, you can resubmit in a subsequent year under a different Challenge if it better matches the new Work Programme. However, changes in scope may require adjustments to your proposal to align with the new Challenge objectives, so plan revisions accordingly.
Will the 2025 Challenges still be available in 2026?
No — the listed 2025 Challenges will disappear after the last 2025 deadline and new Challenges will be defined for 2026. If your project fits a 2025 Challenge, you should act now rather than assuming it will be the same next year.
Does applying to a Challenge increase my chances compared to general calls?
Applying to a targeted Challenge can help if your project aligns closely with that theme, as evaluators look for relevance to the stated priorities. However, you still need strong technical readiness, market potential, and a quality proposal — alignment alone won’t guarantee success.
I submitted to the May 6, 2025 deadline — when did results come out for that round?
For the May 6, 2025 deadline the results arrived on June 13, 2025, which is 38 days after submission. This example sits well inside the usual 4–6 week window and can be used as a recent benchmark.
My deadline was June 3, 2025 — how long should I expect to wait?
With the June 3 deadline, you should expect results around 4–6 weeks after that date (so roughly early to mid-July). If the 4–6 week window passes without news, check your application portal and contact the EIC helpdesk for a status update.
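The 4–6 week window above is plain date arithmetic. As a sanity check, here is a minimal sketch (the deadline dates come from this FAQ; the helper name is our own):

```python
from datetime import date, timedelta

def result_window(deadline: date, min_weeks: int = 4, max_weeks: int = 6):
    """Return the expected (earliest, latest) result dates for a cut-off,
    assuming the typical 4-6 week Step 1 evaluation window."""
    return deadline + timedelta(weeks=min_weeks), deadline + timedelta(weeks=max_weeks)

# June 3, 2025 Step 1 cut-off from the FAQ:
earliest, latest = result_window(date(2025, 6, 3))
print(earliest, latest)  # 2025-07-01 2025-07-15
```

For the May 6, 2025 round, the June 13 results (38 days later) fall inside this computed window, matching the benchmark cited above.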
How do I prepare my input so the AI produces a strong Step 1 proposal?
Use the new detailed input helper to fill form fields for every essential data point, and attach any supporting documents (technology descriptions, patents, customer lists). Add concrete numbers, customer names or links, and clear TRL and funding targets; the platform flags missing data so you can improve inputs before generating. Examples and info boxes in each field explain exactly what evaluators expect.
How does ChatEIC handle financials if I don’t have detailed projections?
ChatEIC generates full financial projections and a downloadable chart based on the input variables and Super Variables that calculate funding needs, co-financing, and timelines. You don’t need to prepare detailed spreadsheets before using it, though providing revenues, costs, and funding targets will improve accuracy. The Overview tab shows the underlying logic so you can review and tweak assumptions.
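To make the co-financing logic concrete, here is an illustrative sketch of the arithmetic involved. The function and field names are hypothetical (this is not ChatEIC's actual Super Variable implementation); the 70% funding rate and €2.5M grant cap are the standard EIC Accelerator grant parameters:

```python
def funding_breakdown(total_costs: float, grant_rate: float = 0.70,
                      grant_cap: float = 2_500_000):
    """Hypothetical sketch of grant co-financing arithmetic, not ChatEIC's
    actual logic. The EIC Accelerator grant typically covers up to 70% of
    eligible costs, capped at EUR 2.5M; the rest must be co-financed."""
    grant = min(total_costs * grant_rate, grant_cap)
    co_financing = total_costs - grant
    return {"grant": grant, "co_financing": co_financing}

print(funding_breakdown(3_000_000))  # {'grant': 2100000.0, 'co_financing': 900000.0}
```

Reviewing the Overview tab lets you compare ChatEIC's actual assumptions against this kind of back-of-the-envelope check.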
I already bought ChatEIC or the Starter Pack — do I get access to the new version?
Yes: previous purchasers receive free credits for the new ChatEIC version. To claim them, simply send the developer an email as mentioned in the announcement. Your existing links to Guide, Training, and Templates remain the same but have updated screenshots and guidance.
How can I verify and edit the AI’s research and references before submission?
Use the Overview tab to inspect DeepResearch outputs, see the references embedded in the main sections, and review how each input variable was used. The platform embeds at least six references in the main text, with hyperlinks wherever you added sources, letting you double-check facts and replace or remove items. If you need further edits, adjust inputs or upload corrected documents and regenerate the proposal.
What is DeepResearch and why does it matter for my application?
DeepResearch runs three AI-driven research rounds on market, competitors, and relevant EU policy before the proposal is generated. This supplies references, facts, and context that most applicants miss and that evaluators value highly. The outputs are integrated into your proposal and visible in the Overview tab so you can verify and adjust them.
What happens if I leave some input fields blank — is input validation mandatory?
Input validation is optional: ChatEIC will warn you if data is missing but won’t force you to complete every field. This lets you generate proposals even with limited data while showing where additional detail would strengthen the application. For best results, respond to the flagged gaps before finalizing the document.
What’s the biggest difference between the new ChatEIC 2.0 and the old version?
ChatEIC 2.0 is a complete rebuild focused on guided input and automated research. It adds a detailed questionnaire with examples, uploads, input validation, and three rounds of AI DeepResearch (market, competitors, EU policy) before generating the proposal. The result is faster, more structured proposals with built-in formatting, financials, references, and placeholders for graphics.
Can I upload documents with technical details or should I paste everything into the form?
Yes — you can upload documents containing technical information, which the system uses alongside form inputs. Uploads are especially useful for complex tech descriptions, patents, or datasets that are hard to capture in form fields. The platform combines uploads with questionnaire variables so the AI places facts where they belong.
Will the generated proposal include proper formatting and visuals or do I need to edit it heavily?
The new output includes page numbers, hyperlinks, tables, references, images, and placeholders for a technical graphic and your company logo, so minimal editing is required. Team members are placed into formatted tables and references from DeepResearch are embedded in key sections. You only need to replace placeholders with final graphics and make any stylistic tweaks you prefer.
How long does it take to generate a proposal now, including DeepResearch?
With the new architecture, a server-rendered proposal takes about 7 minutes on average, including the DeepResearch phase. That’s faster than the previous 10+ minute average and covers market, competitor, and policy research as part of the run. Times can vary slightly based on document size and uploads.
What details does each grant profile show?
Each grant profile includes a detailed summary of eligibility criteria, topic information, and relevant metadata pulled from multiple sources. Where possible, Subsdy consolidates descriptions, deadlines, and links to original documents so you can quickly assess fit. If something’s unclear, use the grant link to view the primary source or reach out to support for clarification.
How do I get started and what information do I need to create a project?
Sign up on the Subsdy website and create a project by providing basic company information, goals, and keywords describing your activities or technology. The more specific you are about your objectives and eligibility (sector, company size, location), the better the AI matches will be. After creating a project, review the initial matches, like or hide grants, and share interesting opportunities with your team.
What are the pricing and trial options?
At launch there is a low-cost tier priced at €19/month, intended to make the tool accessible while features continue to be refined. Additional tiers with more advanced capabilities or team features may be available; check Subsdy's pricing page for current plans. If you're unsure, start with the entry-level plan to test the matching and search before committing to a higher tier.
How often is the grant database updated?
The platform synchronizes daily with the EU portal and other sources to keep listings current. Because the EU database contains inaccuracies (e.g., expired calls still marked open), Subsdy uses smart detection to verify whether grants are truly active. You can rely on daily freshness, and curated news posts highlight interesting new opportunities as they appear.
What is Subsdy and who is it for?
Subsdy is an AI-powered grant discovery platform that helps European companies and individuals find relevant grants and tenders. It’s designed for startups, SMEs, researchers, and grant writers who need to diversify funding beyond a single program. If you want curated matches from the full EU database without manually scanning hundreds of calls, Subsdy saves you time and surfaces opportunities you might miss.
How good is the search compared to the EU Funding & Tenders Portal?
Subsdy’s keyword search examines all fields and prioritizes the most relevant results, which tends to be faster and more accurate than the official portal’s search and filters. The EU portal can be slow and miss hidden calls; Subsdy aggregates multiple sources to improve discoverability. If you’ve struggled with the Funding & Tenders search, Subsdy’s keyword-first approach should feel much more productive.
How do you handle inaccuracies or expired grants listed as open?
Subsdy uses automated checks and heuristics to detect when a grant has actually closed despite being marked open on the EU portal, and it cleans up those entries during daily sync. The platform also integrates multiple sources to cross-verify listings and reduce false positives. If you find an error, report it via support and the team will investigate and correct the listing promptly.
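The core idea behind such a check is simple to sketch. The following is illustrative only (Subsdy's actual detection logic is not public; the dictionary keys are our own):

```python
from datetime import date

def looks_expired(grant: dict, today: date) -> bool:
    """Heuristic sketch: flag a grant as actually closed when the source
    still marks it 'open' but its deadline has already passed.
    Illustrative only; not Subsdy's real implementation."""
    deadline = grant.get("deadline")          # a datetime.date or None
    status = grant.get("status", "").lower()
    return status == "open" and deadline is not None and deadline < today

grants = [
    {"id": "A", "status": "open", "deadline": date(2025, 3, 1)},
    {"id": "B", "status": "open", "deadline": date(2026, 1, 15)},
]
stale = [g["id"] for g in grants if looks_expired(g, date(2025, 8, 1))]
print(stale)  # ['A']
```

Real-world detection also has to handle multi-cut-off calls and missing deadline fields, which is where cross-verifying multiple sources helps.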
Can I share grants and collaborate with my team?
Yes — Subsdy lets you like and hide grants for personal organization and provides shareable links so you can send specific opportunities to teammates. That makes internal review and assignment straightforward without exporting data manually. Use the project structure to keep company-specific searches and collaborators grouped.
How does the AI matching work and how reliable are the results?
You create a project with details about your company and goals, and the AI searches over 1,000 grants and tenders to produce prioritized matches, returning up to 150 ranked results per run. The AI surfaces the most relevant opportunities but can sometimes make mistakes, which is why each match returns multiple options to review. Treat matches as a curated starting point: review the eligibility sections and use the like/hide tools to refine results.
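The matching described above can be thought of as scoring every grant against your project keywords and keeping the top results. The sketch below is purely illustrative (Subsdy's actual AI matching is not public; the field names are our own):

```python
def rank_grants(grants: list, keywords: list, top_n: int = 150) -> list:
    """Minimal keyword-relevance sketch, not Subsdy's real algorithm:
    score each grant by keyword occurrences across all its text fields
    and return the top_n highest-scoring entries."""
    kws = [k.lower() for k in keywords]

    def score(grant: dict) -> int:
        text = " ".join(str(v) for v in grant.values()).lower()
        return sum(text.count(k) for k in kws)

    return sorted(grants, key=score, reverse=True)[:top_n]

grants = [
    {"title": "AI robotics call", "summary": "robotics and AI pilots"},
    {"title": "Agri-food innovation", "summary": "low-emission food systems"},
]
print(rank_grants(grants, ["robotics"])[0]["title"])  # AI robotics call
```

An LLM-based matcher weighs semantics rather than literal keyword counts, which is both why it finds non-obvious matches and why it occasionally errs, hence the advice to review each returned option.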
What grant coverage does Subsdy include?
Subsdy syncs the complete EU Funding & Tenders Portal and supplements it with DeepResearch and web scraping to create a more comprehensive index. You’ll find open and forthcoming EU grants, including some calls that are hidden or hard to find on the official portal, such as EIC Accelerator Step 2 calls. The database is focused on EU grants and tenders, so if you need non-EU national or private grant programs, check with support for scope.