
Quantization #6

@jeaneigsi

Description


Is it possible to load the model in a quantized version with bitsandbytes, i.e. int4, int8, or others? Also, when I try an input around 100,000 tokens long, I get the message: `Token indices sequence length is longer than the specified maximum sequence length for this model (798 > 512). Running this sequence through the model will result in indexing errors`
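For reference, quantized loading with bitsandbytes in `transformers` is usually requested through a `BitsAndBytesConfig` passed to `from_pretrained`. A minimal sketch, assuming `transformers`, `bitsandbytes`, and `accelerate` are installed; the model id is a placeholder, not the actual model from this repo:

```python
from transformers import BitsAndBytesConfig

# 8-bit weights via bitsandbytes (LLM.int8() scheme)
bnb_int8 = BitsAndBytesConfig(load_in_8bit=True)

# 4-bit NF4 weights via bitsandbytes
bnb_int4 = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")

# Loading then looks like this (placeholder model id; needs a GPU
# plus the bitsandbytes and accelerate packages at runtime):
# from transformers import AutoModelForSeq2SeqLM
# model = AutoModelForSeq2SeqLM.from_pretrained(
#     "google/flan-t5-large",
#     quantization_config=bnb_int8,   # or bnb_int4
#     device_map="auto",
# )
```

Whether int4/int8 degrades quality for this particular model is something that would have to be measured, not assumed.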
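On the 512-token warning: that message comes from the tokenizer's `model_max_length` setting, and a common workaround for long inputs is to split them into windows that each fit the limit. A minimal pure-Python sketch of fixed-size chunking with overlap (the chunk size and overlap values here are illustrative choices, not settings from this repo):

```python
def chunk_ids(token_ids, max_len=512, overlap=64):
    """Split a long token-id sequence into overlapping windows,
    each no longer than max_len, so every window fits the model limit."""
    if max_len <= overlap:
        raise ValueError("max_len must exceed overlap")
    step = max_len - overlap
    chunks = []
    for start in range(0, len(token_ids), step):
        chunks.append(token_ids[start:start + max_len])
        if start + max_len >= len(token_ids):
            break
    return chunks

# e.g. the 798-token input from the warning splits into 2 windows
ids = list(range(798))
chunks = chunk_ids(ids)
assert all(len(c) <= 512 for c in chunks)
```

Each window can then be run through the model separately and the outputs merged downstream.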

Also, nice work. I share the same philosophy; in particular, I think the T5 architecture is better suited to seq-to-seq tasks than decoder-only models: LLMs overgenerate, and their loss curves struggle to converge.
