First, create and activate a virtual environment (for example, python -m venv venv followed by source venv/bin/activate), then install the dependencies:
pip install scrapy
pip install peewee
pip install scrapy_jsonschema
pip install django
pip install classla
npx create-react-app frontend
cd frontend
npm install react-router-dom@6
npm install axios
npm install bootstrap@4.6.0 reactstrap@8.9.0 --legacy-peer-deps
pip install django_filter
pip install djangorestframework
pip install django-cors-headers
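After installing the Django-side packages, they also have to be registered in the project's settings. A minimal sketch, assuming the default settings.py layout; the CORS origin below is a development-only example:

```python
# settings.py (sketch): register the installed packages with Django
INSTALLED_APPS = [
    # ... Django's default apps ...
    "rest_framework",   # djangorestframework
    "django_filters",   # django_filter
    "corsheaders",      # django-cors-headers
]

MIDDLEWARE = [
    "corsheaders.middleware.CorsMiddleware",  # place as high as possible
    # ... Django's default middleware ...
]

# allow the React dev server to call the API during development
CORS_ALLOWED_ORIGINS = ["http://localhost:3000"]
```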
To run the spiders that generate the databases:

python GovScraper/start.py

This will run two spiders:
First, one that collects the urls of all the articles into links.db.
Second, one that generates articles.db with all the necessary information from the articles.
A sketch of what such a start script could look like is shown below.
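This is only an illustration of the standard Scrapy pattern for running spiders sequentially; the spider class names and module paths are assumptions, not the project's actual ones:

```python
# start.py (sketch): run the two spiders one after the other
from twisted.internet import defer, reactor
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging
from scrapy.utils.project import get_project_settings

# hypothetical spider classes; the real project defines its own
from GovScraper.spiders.links import LinksSpider
from GovScraper.spiders.articles import ArticlesSpider

configure_logging()
runner = CrawlerRunner(get_project_settings())

@defer.inlineCallbacks
def crawl():
    yield runner.crawl(LinksSpider)     # first: collect article urls into links.db
    yield runner.crawl(ArticlesSpider)  # second: scrape each article into articles.db
    reactor.stop()

crawl()
reactor.run()  # blocks until both spiders have finished
```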
python NLP/main.py

This will extract all of the named entities from every article and store them in articles.db.
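classla follows the stanza API, so the entity-extraction step might look like the sketch below; the language code "mk" is an assumption about the corpus:

```python
import classla

classla.download("mk")  # one-time model download; language code is an assumption
nlp = classla.Pipeline("mk", processors="tokenize,ner")

doc = nlp("...article body text...")
for ent in doc.ents:
    print(ent.text, ent.type)  # entity surface form and its NER label
```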
First, go to https://aws.amazon.com/ and create a free-tier account. Then choose the EC2 service. After that, pick the OS the server will run on; we chose Debian 11. Next, configure the specs of the server through the 'instance type'; we chose t2.micro. To set up SSH access to the instance we used PuTTY, which can be downloaded from https://www.putty.org/; configure it with the .pem key you get from AWS. From there you can install the packages you need through the shell with standard Debian commands, for example:
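Assuming a fresh Debian 11 instance (the package selection here is only an example):

sudo apt update
sudo apt install python3-pip python3-venv git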
cd internshipProj
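If the Django tables have not been created on this machine yet, you may first need to apply the migrations (a standard Django step, assumed here):

python manage.py migrate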
python manage.py runserver

Lastly, go to http://127.0.0.1:8000/.
You will be greeted by the index page.
From there, click on the articles link in the navbar to select from every article present on the government site.
They are sorted by date and reach back to around 2017, with about 2300 articles present. Clicking on a row will lead you to that article's page.
The page is split in two. On one side is the article, with its title, image, and body, and all of the entities highlighted. On the other is a table of the entities sorted by the number of occurrences in the article. Clicking on a row here will lead you to the page of the selected entity.
That page shows the stats for the entity across all articles and where it is found, and from there you can visit the corresponding article.