
Custom LLM models #442

@Moosdijk

Description

Since the application is still under development, please don't report issues if you're using a model other than the standard gemma3:4b.

To skip downloading the default model during install:

1. Run `ollama list` in a terminal to get a list of your local models, then copy the name of the one you want to use instead.
2. In setup.ps1, change "gemma3:4b" to the name you copied on lines 197 and 199.

Setup will then skip installing the gemma3:4b model.
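If you want to script the model selection, the output of `ollama list` can be parsed programmatically. A minimal sketch in Python; the `parse_ollama_list` helper and the sample output format are my own assumptions about the CLI's tabular output (NAME / ID / SIZE / MODIFIED columns), not part of this repo:

```python
import subprocess


def parse_ollama_list(output: str) -> list[str]:
    """Extract model names from the tabular output of `ollama list`.

    Assumes the first line is a column header and the model name
    is the first whitespace-delimited field on each data line.
    """
    lines = output.strip().splitlines()
    return [line.split()[0] for line in lines[1:] if line.strip()]


def local_models() -> list[str]:
    # Runs the real CLI; requires Ollama to be installed and on PATH.
    result = subprocess.run(
        ["ollama", "list"], capture_output=True, text=True, check=True
    )
    return parse_ollama_list(result.stdout)


# Example with sample output (model names and IDs are illustrative):
sample = """NAME            ID              SIZE      MODIFIED
llama3.2:3b     a80c4f17acd5    2.0 GB    3 days ago
gemma3:4b       c0135ac9f58f    3.3 GB    5 weeks ago
"""
print(parse_ollama_list(sample))  # -> ['llama3.2:3b', 'gemma3:4b']
```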

Next, search for the remaining gemma3 references in the backend .py files and make the same change in:

\apps\backend\app\services\job_service.py (line 23)
\apps\backend\app\services\resume_service.py (line 27)
\apps\backend\app\agent\manager.py (line 11; line 43 references the embedding model)
\apps\backend\app\agent\providers\ollama.py (line 14; line 58 references the embedding model)
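Rather than editing each file by hand, the replacement can be scripted. A hedged sketch: the file list comes from this issue, but `replace_model` and the substitute model name are illustrative assumptions, and the embedding-model references noted above are a different string that this script deliberately leaves alone:

```python
from pathlib import Path

# Files listed in this issue that hard-code the chat model name.
FILES = [
    r"apps\backend\app\services\job_service.py",
    r"apps\backend\app\services\resume_service.py",
    r"apps\backend\app\agent\manager.py",
    r"apps\backend\app\agent\providers\ollama.py",
]


def replace_model(path: Path, old: str = "gemma3:4b", new: str = "llama3.2:3b") -> int:
    """Replace every occurrence of `old` with `new` in one file.

    Returns the number of occurrences replaced; leaves the file
    untouched (and returns 0) if the model name is not found.
    """
    text = path.read_text(encoding="utf-8")
    count = text.count(old)
    if count:
        path.write_text(text.replace(old, new), encoding="utf-8")
    return count


if __name__ == "__main__":
    # Run from the repo root; skips files that don't exist.
    for name in FILES:
        p = Path(name)
        if p.exists():
            print(f"{name}: {replace_model(p)} replacement(s)")
```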
