dc.description.abstract |
Artificial intelligence has several applications, one of which is chatbot technology. A chatbot is a computer
program that enables human-like conversation between users and computers or other
communication devices using natural language. Chatbot technology has advanced to a new level
thanks to the application of machine learning algorithms and natural language processing (NLP) methods.
Chatbots are used in different areas, such as business, health, and education. Language learning is
one such application area, and using chatbots for language learning has several benefits: chatbots
provide continuous practice and engagement, they are available to learners 24/7, and they offer a
personalized learning experience. Deep learning and NLP are the two emerging technologies used to
develop effective and efficient language-learning chatbots. Motivated by the benefits of applying
chatbot technology to language learning and by the adoption of deep learning and NLP techniques,
we investigated and developed a basic Afaan Oromoo grammar-learning chatbot. We started by
collecting data from sources we considered relevant for this purpose and then converted the
collected data into a semi-structured format, namely JSON.
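As an illustration only, an intents-style JSON record of the kind commonly used for chatbot training data might look like the following; the field names and placeholder text are assumptions, not the exact schema used in this work.

{
  "intents": [
    {
      "tag": "<grammar topic>",
      "patterns": ["<Afaan Oromoo question or phrase>", "<another phrasing of the same question>"],
      "responses": ["<explanation or answer shown to the learner>"]
    }
  ]
}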
To remove inconsistency and incompleteness and to normalize the text, we applied NLP techniques to
the dataset we prepared. After preprocessing the data, the next task was text feature generation,
i.e., converting the text into a numerical representation that our deep learning model can understand
during training. We used the Bag of Words (BOW) technique for feature extraction.
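A minimal sketch of how BOW features can be produced from tokenized patterns is given below; the helper names and the sample tokens are illustrative assumptions, not the exact pipeline used in this work.

def build_vocabulary(tokenized_sentences):
    # Vocabulary = sorted set of every token seen in the training patterns.
    return sorted({word for sentence in tokenized_sentences for word in sentence})

def bag_of_words(tokenized_sentence, vocab):
    # Each sentence becomes a fixed-length 0/1 vector over the vocabulary.
    return [1 if word in tokenized_sentence else 0 for word in vocab]

# Illustrative placeholder tokens, not the actual Afaan Oromoo dataset.
patterns = [["akkam", "jirta"], ["maqaan", "kee", "eenyu"]]
vocab = build_vocabulary(patterns)
features = [bag_of_words(p, vocab) for p in patterns]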
The model we used is a recurrent neural network (RNN) with long short-term memory (LSTM) units; we
built this model and performed the prediction task. Finally, we evaluated the performance of our
model using different metrics. One metric is training accuracy, which was about 88% over the final
five epochs; the other is training loss, which was about 35% over the final five epochs.
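As a rough sketch of how an LSTM classifier over BOW feature vectors can be defined and trained in Keras, with training accuracy and loss reported per epoch, consider the following; the layer sizes, hyperparameters, and synthetic data are assumptions for illustration, not the configuration used in this work.

import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Illustrative shapes and synthetic data, not the actual dataset.
vocab_size, num_classes = 200, 10
X = np.random.randint(0, 2, size=(500, vocab_size)).astype("float32")
y = np.random.randint(0, num_classes, size=(500,))

# BOW vectors have no time axis, so each one is fed to the LSTM as a length-1 sequence.
X = X.reshape((-1, 1, vocab_size))

model = Sequential([
    Input(shape=(1, vocab_size)),
    LSTM(128),
    Dense(64, activation="relu"),
    Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# fit() logs training loss and accuracy for every epoch, which is how
# final-epoch accuracy and loss figures can be read off.
history = model.fit(X, y, epochs=20, batch_size=16, verbose=1)
|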
en_US |