Evaluating subjective, text-based answers has long been a hurdle in the development of eLearning systems. The core problem is that each student answers in his or her own way, which makes it difficult to determine the degree of correctness.
Evaluation involves judging grammar and knowledge of concepts using the interpretation and creativity of a human mind, and a human evaluator must know the subject on which the paper is to be assessed.
Our proposed system uses machine learning and natural language processing (NLP) to solve this problem.
Typically, there are two types of question answering systems:
Closed-domain question answering
- This deals with questions under a specific domain.
- It is harder to build, since the required information is often not publicly available.
Open-domain question answering
- This deals with questions about nearly everything.
- These rely only on general ontologies and world knowledge.
There are several approaches to generating answers for user questions:
- Linguistic approach: This method requires complex and advanced linguistic analysis programs. It focuses on answer generation by analyzing each question and building an understanding of it.
- Artificial intelligence approach: This method uses an ontology-based knowledge base technique.
- Statistical approach: This method considers word similarity, sentence length and word order.
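The statistical technique described above can be sketched as a simple scoring function. The weights and the exact formula below are assumptions for illustration, not the article's specification; it combines word overlap, sentence-length ratio and word order into one similarity score:

```python
def statistical_similarity(candidate, reference):
    """Score a candidate answer against a reference answer (0.0 to 1.0)
    using word overlap, sentence length and word order.
    The 0.5/0.2/0.3 weights are illustrative assumptions."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    if not cand or not ref:
        return 0.0
    # Word overlap: Jaccard similarity of the two vocabularies
    overlap = len(set(cand) & set(ref)) / len(set(cand) | set(ref))
    # Sentence length: ratio of the shorter answer to the longer one
    length = min(len(cand), len(ref)) / max(len(cand), len(ref))
    # Word order: fraction of shared words appearing in the same relative order
    shared = [w for w in cand if w in ref]
    if not shared:
        order = 0.0
    elif len(shared) == 1:
        order = 1.0
    else:
        pos = [ref.index(w) for w in shared]
        order = sum(a <= b for a, b in zip(pos, pos[1:])) / (len(pos) - 1)
    return 0.5 * overlap + 0.2 * length + 0.3 * order
```

An identical answer scores near 1.0, while an unrelated answer scores close to 0.0; real systems refine this with stemming and synonym handling.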
The methodologies used in this project are machine learning and NLP:
- The software takes a copy of the answer as input and, after a pre-processing step, extracts the text of the answer.
- This text is processed again to build a model of keywords and feature sets. Model answer sets and keywords, categorized as mentioned above, are also given as input. Based on its training, the classifier then assigns marks to the answer.
- The marks awarded are the final output. After extraction, the answers to all questions are stored in a database, which brings much more transparency to the present method of answer checking.
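The keyword-extraction and marking steps of this pipeline can be sketched as follows. The stopword list, function names and marking scheme are hypothetical stand-ins for the trained classifier the article describes:

```python
import re

# Hypothetical minimal stopword list; a real system would use a fuller one
STOPWORDS = {"the", "a", "an", "is", "are", "of", "and", "to", "in", "for"}

def preprocess(text):
    """Lowercase the text and split it into alphanumeric tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def extract_keywords(text):
    """Keep only content words, dropping stopwords."""
    return {tok for tok in preprocess(text) if tok not in STOPWORDS}

def grade(student_answer, model_keywords, max_marks=5):
    """Award marks in proportion to model-answer keywords covered."""
    found = extract_keywords(student_answer)
    covered = len(found & model_keywords) / len(model_keywords)
    return round(covered * max_marks)
```

For example, if the model keywords for a biology question are `{"photosynthesis", "chlorophyll", "sunlight", "glucose"}`, an answer mentioning all four receives full marks under this sketch.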
The only disadvantages of using a computer as an evaluation tool are the possibility of failure at the time of answer submission and the lack of technical training provided to teachers and students.
There are three types of performance evaluation for question answering systems:
- Manual evaluations
- Semi-automatic evaluations
- Fully automatic evaluations
- To create a data repository of questions and answers as a knowledge base.
- To design a method to read answers and extract keywords or features from them.
- To develop a method/grammar to construct a meaningful statement from the generated features.
- To develop an approach to test the intermediate answers against the knowledge base.
- To allocate marks based on the comparison or testing.
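The repository and mark-allocation objectives above can be sketched with an in-memory database. The schema, table name and keyword format are assumptions, and `sqlite3` stands in for the MySQL server listed in the requirements:

```python
import sqlite3

# Knowledge base of questions, model answers and expected keywords
# (schema and sample data are illustrative assumptions)
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE answers (
    question_id TEXT, model_answer TEXT, keywords TEXT, max_marks INTEGER)""")
conn.execute(
    "INSERT INTO answers VALUES (?, ?, ?, ?)",
    ("Q1", "Force equals mass times acceleration.",
     "force,mass,acceleration", 4),
)

def allocate_marks(question_id, student_answer):
    """Compare the answer's words with the stored keywords and award marks."""
    keywords, max_marks = conn.execute(
        "SELECT keywords, max_marks FROM answers WHERE question_id = ?",
        (question_id,),
    ).fetchone()
    wanted = set(keywords.split(","))
    found = set(student_answer.lower().replace(".", " ").split())
    return round(len(wanted & found) / len(wanted) * max_marks)
```

An answer covering all stored keywords for `Q1` earns the full 4 marks in this sketch, regardless of word order.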
- Py-Charm IDE Community Edition 2018.1.4 x64
- MySQL v5.7.7 or higher
- XAMPP v3.2.2
- Python 2.4 or higher
- Microsoft Windows 10/8/7/Vista/2003/XP.
- RAM: 1 gigabyte (GB) for 32-bit or 2 GB for 64-bit.
- Processor: 1 gigahertz (GHz) or faster.
- Hard disk: 5 GB of space, plus at least 1 GB for caches.
- Display: 1024×768 resolution minimum.
Considering all of these approaches and the downfalls of each, we extracted the relevant parameters and proposed an approach to automatically evaluate short answers for descriptive subjects.