To clone this repository together with the required bert-japanese submodule, run:

```bash
git clone --recurse-submodules https://github.com/reo11/aes-for-japanese-learner
```
Please download our pre-trained models from here and place them in a directory named `trained_models`.
You can predict the scores by running the following command:

```bash
python predict_with_**.py [input.csv]
```
When running the LSTM and BERT models, it is better to use a GPU as follows:

```bash
CUDA_VISIBLE_DEVICES=0 python predict_with_**.py [input.csv]
```
The input CSV of essays must be formatted as follows:

| text_id | prompt | text |
| --- | --- | --- |
| ex1 | ... | ... |
| ex2 | ... | ... |
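For reference, here is a minimal sketch of how such an input file could be assembled with pandas. The column names follow the table above; the file name `input.csv`, the example prompt, and the essay texts are illustrative assumptions only:

```python
# Minimal sketch: build an input file, assuming the scripts expect a plain
# comma-separated CSV with the columns text_id, prompt, and text.
# The prompt and essay texts below are illustrative placeholders.
import pandas as pd

essays = pd.DataFrame(
    {
        "text_id": ["ex1", "ex2"],
        "prompt": ["あなたの国の料理について説明してください。"] * 2,
        "text": [
            "私の国の料理はとてもおいしいです。...",
            "日本の料理と比べると、味が濃いです。...",
        ],
    }
)

# Write without the pandas index so only the three expected columns appear.
essays.to_csv("input.csv", index=False)
```

The resulting file can then be passed to `predict_with_**.py` as shown above.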
The output CSV is formatted as follows:

| text_id | holistic | content | organization | language |
| --- | --- | --- | --- | --- |
| ex1 | 3 | 3 | 3 | 3 |
| ex2 | 4 | 4 | 4 | 4 |
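If you want to process the predictions downstream, the output table can be read back with pandas. This is a minimal sketch; the file name `output.csv` is an assumption, and the column names follow the table above:

```python
# Minimal sketch: read the predicted scores back in, assuming a
# comma-separated CSV named "output.csv" with the columns shown above.
import pandas as pd

scores = pd.read_csv("output.csv")

# Each row carries one holistic score plus three analytic scores
# (content, organization, language) per essay.
for row in scores.itertuples(index=False):
    print(
        f"{row.text_id}: holistic={row.holistic}, content={row.content}, "
        f"organization={row.organization}, language={row.language}"
    )
```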