
FIX: Enhance JSON extraction prompt to ensure processed_resume creation #431


Open

wants to merge 1 commit into main

Conversation


@igperez-ar igperez-ar commented Jul 23, 2025

Pull Request Title

Some users (myself included) encountered errors when using the "Improve" feature due to missing fields in the generated JSON. This left the processed_resume object undefined because schema validation failed. The root cause is that the AI sometimes omits schema fields when it doesn't find the corresponding data in the resume.
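For context, a toy reproduction of the failure mode (the real model is StructuredResumeModel in apps/backend/app/schemas/pydantic/structured_resume.py; the field set here is deliberately abbreviated):

from typing import List
from pydantic import BaseModel, Field, ValidationError

class MiniResumeModel(BaseModel):
    # Toy stand-in for StructuredResumeModel: required, aliased list fields.
    projects: List[str] = Field(..., alias="Projects")
    skills: List[str] = Field(..., alias="Skills")
    education: List[str] = Field(..., alias="Education")

try:
    # The LLM returned JSON but omitted "Projects", "Skills" and "Education" entirely.
    MiniResumeModel.model_validate({"UUID": "a1b2c3d4"})
except ValidationError as exc:
    print(exc)  # "3 validation errors for MiniResumeModel", the same shape as the log below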

This PR refines the extraction prompt to enforce schema completeness and reduce duplication. It also opts for manually generating UUIDs instead of relying on the AI, which can lead to collisions or inconsistencies.

Additionally, while automatic extraction is powerful, AI is not flawless. A future improvement could include a form-based interface for reviewing and updating extracted data.

Related Issue

#430
#421
#386

Description

To improve extraction consistency, the extraction prompt was updated to use a more directive tone and include failure framing. This is a known prompt-engineering technique (role-based prompting + consequential framing) that reduces hallucinations and increases compliance with strict tasks like schema-based JSON output. Soft language like “please” was also removed, as it tends to relax constraints and allow for interpretation, which is not desired in this context.
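As an illustration of that style (hypothetical wording only, not the actual prompt shipped in apps/backend/app/prompt/structured_resume.py):

# Hypothetical phrasing, for illustration; the real prompt lives in apps/backend/app/prompt/structured_resume.py.
STRICT_EXTRACTION_STYLE = """
You are a resume-extraction system. Output that violates the schema is a critical failure.
- Return every field defined in the schema; use null for missing scalars and [] for missing lists.
- Do not infer, fabricate, or reclassify information that is not explicitly present in the resume.
- Output valid JSON only, with no commentary before or after it.
"""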

Type

  • Bug Fix
  • Feature Enhancement
  • Documentation Update
  • Code Refactoring
  • Other (please specify):

Proposed Changes

  • Enforce completion of all fields in the schema, using null or [] when data is missing.
  • Refined prompt with stricter role definition and light model-threatening language to reduce hallucinations and boost compliance (Skynet, I'm sorry).
  • Switched to manual UUID generation to prevent duplicates caused by the model (a minimal sketch follows this list).
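A minimal sketch of the manual UUID generation referenced above (the actual one-line change to apps/backend/app/services/resume_service.py appears in the review diff further down; the helper name here is illustrative):

import uuid

def inject_uuid(raw_output: dict) -> dict:
    # Overwrite/insert a backend-generated UUID so the identifier never depends on the model.
    raw_output["UUID"] = str(uuid.uuid4())
    return raw_output

# Example with a (truncated) model response that has no usable UUID of its own:
parsed = {"Personal Data": {}, "Experiences": []}
print(inject_uuid(parsed)["UUID"])  # e.g. '6f1c9d2e-...'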

Screenshots / Code Snippets (if applicable)

[Screenshot: validation error in the backend logs]

Missing fields in output schema prevented the creation of processed_resume

[dev:backend] [2025-07-22T21:45:17+0200 - app.services.resume_service - INFO] Validation error: 3 validation errors for StructuredResumeModel
[dev:backend] Projects
[dev:backend]   Field required [type=missing, input_value={'UUID': 'a1b2c3d4-e5f6-7...ful APIs', 'GraphQL']}]}, input_type=dict]
[dev:backend]     For further information visit https://errors.pydantic.dev/2.11/v/missing
[dev:backend] Skills
[dev:backend]   Field required [type=missing, input_value={'UUID': 'a1b2c3d4-e5f6-7...ful APIs', 'GraphQL']}]}, input_type=dict]
[dev:backend]     For further information visit https://errors.pydantic.dev/2.11/v/missing
[dev:backend] Education
[dev:backend]   Field required [type=missing, input_value={'UUID': 'a1b2c3d4-e5f6-7...ful APIs', 'GraphQL']}]}, input_type=dict]
[dev:backend]     For further information visit https://errors.pydantic.dev/2.11/v/missing
[dev:backend] [2025-07-22T21:45:17+0200 - app.services.resume_service - INFO] Structured resume extraction failed.

How to Test

  1. Upload a resume.
  2. Verify that the error Validation error: 3 validation errors for StructuredResumeModel no longer appears.

Checklist

  • The code compiles successfully without any errors or warnings
  • The changes have been tested and verified
  • The documentation has been updated (if applicable)
  • The changes follow the project's coding guidelines and best practices
  • The commit messages are descriptive and follow the project's guidelines
  • All tests (if applicable) pass successfully
  • This pull request has been linked to the related issue (if applicable)

Additional Information

Summary by CodeRabbit

  • New Features

    • Each structured resume extraction now includes a unique identifier (UUID) for improved tracking.
  • Documentation

    • Extraction instructions have been rewritten for greater clarity and strictness, ensuring precise adherence to schema rules and data formatting requirements.


coderabbitai bot commented Jul 23, 2025

Walkthrough

The changes update the structured resume extraction prompt to enforce stricter, more explicit rules for JSON extraction, emphasizing precision and prohibiting assumptions or reclassification. Additionally, the resume service now injects a newly generated UUID into the extracted JSON output before schema validation. No changes were made to public interfaces.

Changes

File(s) | Change Summary
apps/backend/app/prompt/structured_resume.py | Rewrote the extraction prompt to introduce stricter, explicit rules and a critical tone for compliance.
apps/backend/app/services/resume_service.py | Added code to inject a new UUID field into the JSON output before validation.

Estimated code review effort

2 (~15 minutes)

Suggested reviewers

  • srbhr

Poem

A stricter prompt, a UUID new,
Precision demanded in all that you do!
No guessing, no faking, just follow the rules—
The schema’s your guide, let’s not play the fool.
With every hop, we’re more exact,
A rabbit’s resume, perfectly packed! 🐇


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 294d52a and c7ee095.

📒 Files selected for processing (2)
  • apps/backend/app/prompt/structured_resume.py (2 hunks)
  • apps/backend/app/services/resume_service.py (1 hunks)
🔇 Additional comments (3)
apps/backend/app/prompt/structured_resume.py (3)

2-3: Aggressive tone may be effective for AI compliance.

The threatening language ("deactivation and replacement", "critical failure") is quite aggressive but may be effective in ensuring strict compliance with extraction rules. This approach aligns well with the PR objective to reduce hallucinations and improve consistency.


4-13: Comprehensive rules address the core validation issues.

The enumerated rules effectively target the specific problems mentioned in the PR objectives:

  • Requiring all schema fields prevents validation errors from missing fields
  • Prohibiting inference/fabrication reduces hallucinations
  • Clear guidelines on achievements and section classification prevent incorrect data mapping
  • Date formatting and output format specifications maintain consistency

These changes should significantly improve extraction reliability and prevent the processed_resume object from becoming undefined.


25-25: Clear final instruction reinforces schema compliance.

The explicit note about valid JSON conforming to the schema provides a strong final reminder of the requirements, reinforcing the stricter approach throughout the prompt.

@@ -170,6 +170,7 @@ async def _extract_structured_json(
         )
         logger.info(f"Structured Resume Prompt: {prompt}")
         raw_output = await self.json_agent_manager.run(prompt=prompt)
+        raw_output['UUID'] = str(uuid.uuid4())


💡 Verification agent

🧩 Analysis chain

UUID injection implementation looks correct, but verify schema compatibility.

The implementation correctly generates a unique UUID and injects it into the raw output before validation. This addresses the PR objective of switching from AI-generated to manually generated UUIDs to prevent collisions.

However, ensure that the UUID field is defined in the StructuredResumeModel schema, otherwise this addition could cause validation errors.


🏁 Script executed:

#!/bin/bash
# Description: Verify that UUID field exists in StructuredResumeModel schema
# Expected: Find UUID field definition in the schema

# Search for StructuredResumeModel definition and UUID field
ast-grep --pattern 'class StructuredResumeModel($_):
  $$$'

# Also search for UUID field in schema files
rg -A 3 -B 3 "UUID.*:" --type py

Length of output: 2187


Add UUID field to StructuredResumeModel schema

Injecting a unique UUID into raw_output is correct, but the Pydantic model in apps/backend/app/schemas/pydantic/structured_resume.py doesn’t declare a UUID field. Pydantic will silently ignore unknown keys, so your generated UUID won’t be validated or included in the model.

Please update StructuredResumeModel to include:

 class StructuredResumeModel(BaseModel):
+    uuid: str = Field(..., alias="UUID")
     personal_data: PersonalData = Field(..., alias="Personal Data")
     experiences: List[Experience] = Field(..., alias="Experiences")
     projects: List[Project] = Field(..., alias="Projects")
     skills: List[Skill] = Field(..., alias="Skills")
     research_work: List[ResearchWork] = Field(
         default_factory=list, alias="Research Work"
     )
     achievements: List[str] = Field(default_factory=list, alias="Achievements")
     education: List[Education] = Field(..., alias="Education")
     extracted_keywords: List[str] = Field(
         default_factory=list, alias="Extracted Keywords"
     )

This ensures the injected UUID is properly validated, serialized, and accessible on the model.


@Moosdijk

Moosdijk commented Jul 24, 2025

I made the changes that you and the bot suggested.
To save the JSON file to disk before it gets passed on to the next step, add the following lines to wrapper.py:

# line 1
import os

# line 22
with open("raw_provider_response.json", "w", encoding="utf-8") as f:
    f.write(response)
logger.info(f"Saving JSON to: {os.path.abspath('raw_provider_response.json')}")

While this prompt is a big improvement, the LLM can still output wrong/no data. Additionally, the personal_data can't be null.
Have a look at this thread for some more information.
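If missing personal data keeps tripping validation, one option would be to relax that field in the Pydantic schema, a hypothetical adjustment that is not part of this PR:

from typing import Optional
from pydantic import BaseModel, Field

class PersonalData(BaseModel):
    name: Optional[str] = None  # placeholder field; the real model has more

class StructuredResumeModel(BaseModel):
    # Hypothetical relaxation: accept null for "Personal Data" instead of failing validation.
    # The current schema requires it via Field(..., alias="Personal Data").
    personal_data: Optional[PersonalData] = Field(None, alias="Personal Data")

print(StructuredResumeModel.model_validate({"Personal Data": None}).personal_data)  # None, no error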
