Design of a Counsellor Chat Bot Service


ABSTRACT

The aim of this project is to build a counsellor chat bot, offered as a service, which responds to the person and gives advice based on the premise presented to it. Counsellor chatbots already exist, but they are retrieval based, which means they can only respond to inputs they have been trained on; this does not cover all scenarios and therefore does not work in all scenarios. Generative chatbots work on the basis of content classification and can therefore respond in all scenarios. A large amount of information has to be analysed to understand what is being said and what it could mean, since the context can carry many meanings along with vagueness and incomplete or incorrect data. The bot can be used by people to chat with, to understand their problem, or simply to talk to when they need to keep a flow of thought going.

Introduction

  1. Theoretical Background

Chat-bots are computer programs that interact with users in natural language, conducting a conversation via auditory or textual methods and relying on natural language processing for the interaction.

Counselling is a principled relationship characterized by the application of one or more psychological theories and a recognised set of communication skills, modified by experience, intuition and other interpersonal factors, to clients' intimate concerns, problems or aspirations. Its predominant ethos is one of facilitation rather than of advice-giving or coercion. Counseling psychology is a psychological specialty that encompasses research and applied work in several broad domains: counseling process and outcome; supervision and training; career development and counseling; and prevention and health. Some unifying themes among counseling psychologists include a focus on assets and strengths, person-environment interactions, educational and career development, brief interactions, and a focus on intact personalities. Counseling is the application of mental health, psychological or human development principles, through cognitive, affective, behavioral or systemic interventions and strategies, that address wellness, personal growth or career development, as well as pathology.

Positive psychology is the branch of psychology that uses scientific understanding and effective intervention to aid in the achievement of a positive outlook on subjective experiences, individual traits, and events that occur throughout one's lifetime. The goal of positive psychology is to step away from the pathological thoughts that may arise in a hopeless mindset and instead to maintain a sense of optimism that allows people to understand what makes life worth living.

Mental health includes our emotional, psychological, and social well-being. It affects how we think, feel, and act, and it helps determine how we handle stress, relate to others, and make choices. Mental health is important at every stage of life, from childhood and adolescence through adulthood. The crucial first step is realising that something is having an impact on you, and then understanding why; only then can a person get it corrected, or correct it themselves. Maintaining good mental health is crucial to living a long and healthy life: good mental health can enhance one's life, while poor mental health can prevent someone from living an enriching life.
  2. Motivation

Being able to take care of one's mental health at all times is valuable, and this approach makes it possible. The service is available everywhere, easily accessible, and easy to trust, since the software never breaks confidentiality. It can therefore be better and more efficient than the present method. Making the chat-bot generative enables it to give counselling in all scenarios.
  3. Aim of the proposed work

The aim is the development of a counsellor chat bot, offered as a service, which responds to the person and gives advice based on the premise given to it. The project sets out to build a virtual AI that is capable of understanding the sentiment of the conversation. This will benefit people, as they will be able to check their mental health whenever they want and get appropriate suggestions, or simply keep the conversation flowing.
  4. Objective(s) of the proposed work

The project aims to build a generative, text-based chat bot that is not limited to the data it is trained on but can also learn from all the data it interacts with and generate responses based on what it learns. The complexity lies in developing a domain-specific chat bot that provides valuable replies by understanding the conversation and asking relevant questions. The work aims to help people by interacting with them and keeping a check on their mental health whenever they talk to the bot, which is easy to do because the bot is accessible everywhere and at all times. The bot should be able to identify the problem, prompt for and gather more relevant information about the user, help the user understand the problem to a better extent, and, beyond that, offer solutions.

Literature Survey

  1. Survey of the Existing Models/Work

Eliza: Eliza simulated conversation using a pattern-matching and substitution methodology, responding to the user by matching the input against its pre-defined scenarios. It gave users an illusion of understanding on the part of the program, but had no built-in framework for contextualizing events.

Brisbot: Brisbot is a chat bot for giving advice directly. It is loaded with hundreds of questions from kids, with answers written by counselors.

Mitsuku: Mitsuku is a chatbot created using AIML. It learns on its own and can thus respond to things it has not been trained on, but it is limited to the rules and features it was created with. Mitsuku won the 2013 and 2016 Loebner Prize.

Joy: Joy is a chat bot that looks after your mental health. It sends daily check-ins asking how you feel and what you did that day, then uses your response to interpret your mood and respond appropriately.

ChatGPT: ChatGPT is an OpenAI model capable of creating researched and referenced written material from a simple text or voice input.
  2. Summary/Gaps identified in the Survey

Most existing chatbots are retrieval based, so they can only respond to things they know about; they work on phrases and cannot match anything beyond them, and they have to be updated manually rather than updating themselves. Xen will be built specifically for managing mental health and will have its own intent classification on which to base its suggestions, so it can be updated and focused on particular areas.

Eliza: Eliza gave users an illusion of understanding on the part of the program, but had no built-in framework for contextualizing events.

Brisbot: Brisbot does not respond to open questions; instead the user navigates through pre-programmed professional advice.

Mitsuku: Mitsuku updates itself but was created with the generic objective of holding a conversation and is limited to the rules and features it was built with using AIML.

Joy: Joy uses a generic corpus provided by IBM Watson and Microsoft LUIS to understand intent and give responses.

Overview of the Proposed Systems

  1. Introduction and Related Concepts

A chatbot is an automated system that responds to pre-set conditions or to the response or question provided to it. A counsellor helps the person understand whether any mental health issue is present or troubling them, helps them learn more about it, and, where possible, improves the person's mental health.

Retrieval-based models: These models are easy to understand and develop. To train them, a predefined set of queries and responses is fed into the system. This works well for a closed domain, and the chat bot eventually becomes effective at understanding context and content. These models are incapable of generating any new text or content; in essence, they select a definite text from a predefined set and use the one that best serves the intent of the question. They serve well for chat bots that perform a specific task.

Generative models: These models have no defined classes and generate entire answers of their own based on their understanding of a given sentence. They produce entirely new responses from scratch.

Domains: In an open-domain setting the user can take the conversation anywhere, and there is not necessarily a well-defined goal or intention. Conversations on platforms like Messenger and Telegram are typically open domain; the infinite number of topics and the amount of world knowledge required make it hard to create reasonable responses. In a closed-domain setting the space of possible inputs and outputs is limited because the system is trying to achieve a very specific goal.

Types of generative chat bots: Generative Adversarial Networks (GANs) frame the training process as a game between two separate networks: a generator network and a discriminative network that tries to classify samples as coming from either the true distribution or the model distribution. Whenever the discriminator notices a difference between the two distributions, the generator adjusts its parameters slightly to remove it, until in the end the generator reproduces the true data distribution and the discriminator can no longer find a difference. Variational Autoencoders (VAEs) allow us to formalize training in the framework of probabilistic graphical models, where we maximize a lower bound on the log likelihood of the data. Autoregressive models train a network that models the conditional distribution of each output element given the previous ones.

Sentiment Analysis: Language is an easy, natural way for humans to express themselves. Sentiment analysis, or opinion mining, is the process of understanding which sentiment is being conveyed through text and taking the context into account accordingly. The broad categories of sentiment considered are:
  1. Happy
  2. Surprise
  3. Sadness
  4. Contempt
  5. Disgust
  6. Anger
Context Classification: In the context model, classification judgments are based on the retrieval of stored exemplar information. Specifically, we assume that a probe stimulus functions as a retrieval cue to access information stored with stimuli similar to the probe. This mechanism is, in a sense, a device for reasoning by analogy, inasmuch as classification of new stimuli is based on stored information about old exemplars, but one should also note its similarity to contemporary memory models.
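A minimal sketch of this exemplar-style classification, assuming a small hand-labelled set of example sentences (the sentences and labels below are illustrative placeholders, not the project's corpus):

# Exemplar-based ("context model") classification sketch: the probe acts as a
# retrieval cue and takes the label of the most similar stored exemplar.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

exemplars = [
    ("I can't sleep and I feel worried all the time", "anxiety"),
    ("Nothing feels enjoyable any more", "low_mood"),
    ("I had a great day with my friends", "positive"),
]
texts, labels = zip(*exemplars)

vectorizer = TfidfVectorizer()
exemplar_vectors = vectorizer.fit_transform(texts)

def classify(probe):
    # Score the probe against every stored exemplar and return the closest label.
    probe_vector = vectorizer.transform([probe])
    similarities = cosine_similarity(probe_vector, exemplar_vectors)[0]
    return labels[similarities.argmax()]

print(classify("Lately I worry about everything and barely sleep"))  # e.g. "anxiety"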
  2.  Framework, Architecture or Module for the Proposed System (with explanation)

Corpus: A text corpus is a large and structured set of texts. The corpora used are: anc.org (Open American National Corpus) as a chat corpus; theinkblot.com as a Rorschach test corpus; thecrisistext.com as a counsellor corpus; AIML ALICE as a psychiatrist and therapist corpus; and the NLTK Eliza data as a chat corpus.

Tools and libraries:
NLTK: the Python Natural Language Toolkit. It provides a suite of text-processing libraries for classification, tokenization, stemming, tagging, parsing and semantic reasoning, plus wrappers for industrial-strength NLP libraries.
Scikit-learn: features various classification, regression and clustering algorithms, including support vector machines, random forests, gradient boosting, k-means and DBSCAN, and is designed to interoperate with the Python numerical and scientific libraries NumPy and SciPy.
TensorFlow: an open-source software library for machine learning, used to build and train neural networks that detect and decipher patterns and correlations, analogous to the learning and reasoning humans use.
Pandas: a Python library for data manipulation and analysis; in particular, it offers data structures and operations for manipulating numerical tables and time series.
Matplotlib: a Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms.
NumPy: the fundamental package for scientific computing with Python.
re: regular expressions.
sqlite3: SQLite is a self-contained, high-reliability, embedded, full-featured, public-domain SQL database engine.
PyModels: a lightweight framework for mapping Python classes to schema-less databases.
Psycopg2 / PostgreSQL: an object-relational database with an emphasis on extensibility and standards compliance.
Pickle: the standard Python mechanism for object serialization. Pickle uses a simple stack-based virtual machine that records the instructions used to reconstruct the object.
Virtualenv: a tool to create isolated Python environments, so that programs can be hosted with all their dependencies.
Postman: the Postman REST client is an HTTP request composer that makes it easy to call web services. It provides an alternative to hand-crafting request calls for existing services, but requires users to install the client.
CUDA: a parallel computing platform and application programming interface which uses NVIDIA GPUs for computation.
Telegram API: allows us to build customized Telegram clients; it is open source.
Messenger API: allows us to build customized Messenger clients.
ngrok: provides secure, introspectable tunnels to localhost and serves as a webhook development and debugging tool.
LUIS: the Language Understanding Intelligent Service. LUIS is designed to let developers build smart applications that can understand human language and react to user requests accordingly; it can interpret conversations in terms of their intents and entities.
Watson: a question answering (QA) computing system that IBM built to apply advanced natural language processing, information retrieval, knowledge representation, automated reasoning, and machine learning technologies to open-domain question answering.

Architecture: The chatbot flow can be divided into three parts. The first is message triggers: the Intro trigger fires when a chat starts or resumes after a gap, the System trigger fires when the chatbot itself wants to send a message, and the User trigger fires when a response is given after the user completes a task. The second is message flows, which define how a conversation proceeds after it has started. The third is the message copy matrix, which captures the interconnected flows of messages that may lead from one to another and shows where the flows end.
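As a brief illustration of the NLTK preprocessing facilities listed above, a minimal sketch (it assumes the standard 'punkt' and 'averaged_perceptron_tagger' resources can be downloaded; the sample sentence is illustrative):

# Basic NLTK preprocessing: tokenization followed by part-of-speech tagging.
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

text = "I have been feeling very stressed about my exams lately."
tokens = nltk.word_tokenize(text)   # split the sentence into word/punctuation tokens
tags = nltk.pos_tag(tokens)         # attach a part-of-speech tag to each token
print(tokens)
print(tags)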

(Figure: message trigger and message flow diagrams)

 

There are two types of flows. Linear flows are those where the conversation goes in the same direction irrespective of the response; they are used while gathering the initial idea and collecting data before it can be analyzed. Conditional flows are those where the conversation branches according to the response. (Figure: linear and conditional message flows)

Content modules are the various functions applied to the text to transform it and gather more data from it, as well as to adjust the responses so as to give an overall better output. Sentiment analysis, subjectivity or polarity analysis, and summarization are a few of the module types; a small sketch of the sentiment/polarity module follows.
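A minimal sketch of such a sentiment/polarity module using NLTK's VADER analyser (this is one possible choice, not necessarily the project's; the example message is illustrative):

# Sentiment / polarity analysis sketch with NLTK's VADER analyser.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

message = "I feel completely exhausted and nothing seems to help."
scores = analyzer.polarity_scores(message)  # {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}
print(scores)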


Content Classification: Context classification judgments are based on the retrieval of stored exemplar information. Specifically, we assume that a probe stimulus functions as a retrieval cue to access information stored with stimuli similar to the probe. This mechanism is, in a sense, a device for reasoning by analogy, inasmuch as classification of new stimuli is based on stored information about old exemplars, but one should also note its similarity to contemporary memory models.

A domain classifier classifies input into one of a pre-defined set of conversational domains; it is only necessary for apps that handle conversations across varied topics, each with its own specialized vocabulary. Intent classifiers determine what the user is trying to accomplish by assigning each input to one of the intents defined for the application. Entity recognizers extract the words and phrases, or entities, that are required to fulfill the user's end goal; named entity recognition (NER) can be done with Stanford NLP or Microsoft LUIS. Role classifiers assign a differentiating label, called a role, to the extracted entities; this level of categorization is only necessary where an entity of a particular type can have multiple meanings depending on the context. A sketch of a simple intent classifier is given below.

Mixed initiative means that the conversation can be started either by the chat-bot or by the user and can continue from either side, giving a two-way flow of conversation. The text is processed by a generative chat bot and a reply is generated from it. In parallel, a sentiment/emotion recognition engine extracts the sentiment by processing the same text that acts as input to the chat bot engine.

Modules:
General chat-bot: the user can chat casually with the bot, and the conversation is steered so as to get to know the user better.
Summarizer: shortens the whole conversation down to the point the user is trying to make, and normalizes the result so that the flow of the conversation is not influenced too much by how much content the user has given on a particular topic.
Sentiment analysis: understands which sentiment is being conveyed through the text and takes the context into account accordingly.
Rorschach test: obtains the user's response to each image in the test, interprets its meaning, and continues the conversation from that point based on the aspect each image represents.
Conclusion: if the chat-bot reaches any conclusion about the person, it presents it and suggests remedies accordingly.
Self-learning: the conversations held with the bot are analysed and saved in the database, from where they are used by the bot in later conversations.

In the basic version there is a single bot that analyses the last sentence and responds to it; this version needs an internet connection so that it can analyse the given query and respond, and it also has a ranking system. It will be upgraded so that additional bots consider the mood of the conversation and the topic as a whole rather than only the last sentence. It will further be upgraded to become a stand-alone application, with improved ranking and login pages backed by per-user databases for people who chat often, so that the chat-bot becomes accustomed to them and gives better responses. In the final stage it will be integrated into other software, such as digital assistants (for example Dragon), to improve their efficiency.
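A minimal sketch of the intent classification step, using scikit-learn with a tiny illustrative training set (the intents and utterances are placeholders, not the project's corpus):

# Intent classifier sketch: TF-IDF features plus a linear classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_utterances = [
    ("hello there", "greeting"),
    ("hi, how are you", "greeting"),
    ("I feel anxious about work", "share_feeling"),
    ("I've been sad for weeks", "share_feeling"),
    ("what can I do to feel better", "ask_advice"),
    ("how do I deal with stress", "ask_advice"),
]
texts, intents = zip(*training_utterances)

intent_model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
intent_model.fit(texts, intents)

print(intent_model.predict(["I am so stressed about my exams"]))  # e.g. ['share_feeling']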

Auto Talk Engine This engine makes your chat-bot respond automatically if the user hasn't chatted in a specific period of time. You can also improve this engine by making your chat-bot wait if the user is typing a message. Assume that two minutes is the defined period of time before this engine works. If the time is up, your chat-bot sends a response from the Auto Talk Database.
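A minimal sketch of such an auto-talk timer, assuming the two-minute idle window described above and an illustrative list standing in for the Auto Talk Database:

# Auto Talk Engine sketch: if the user stays silent for IDLE_SECONDS,
# send a prompt drawn from an auto-talk list.
import random
import threading

IDLE_SECONDS = 120  # the two-minute window
AUTO_TALK_LINES = [
    "Are you still there? Take your time.",
    "Would you like to tell me more about that?",
]

class AutoTalkEngine:
    def __init__(self, send_message):
        self.send_message = send_message
        self.timer = None

    def user_spoke(self):
        # Restart the countdown every time the user sends (or starts typing) a message.
        if self.timer is not None:
            self.timer.cancel()
        self.timer = threading.Timer(IDLE_SECONDS, self._prompt)
        self.timer.start()

    def _prompt(self):
        self.send_message(random.choice(AUTO_TALK_LINES))

# Usage: engine = AutoTalkEngine(print); engine.user_spoke()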

  3.  Proposed System Model (ER Diagram/UML Diagram/Mathematical Modeling)

This chat-bot is able to understand the mood, respond with respect to it, understand the whole conversation, and continue its flow. Instead of building a chatbot that is hard-coded with a set of responses, the proposed model is capable of generating responses it has never seen before. Rather than just identifying the context, intents and entities and using these to structure a response manually, it uses sequential neural networks and deep learning to identify context and respond accordingly to the text given to it.

Internal software data structure: The software has a database containing a degree-of-usefulness column and a phrase set, with phrases separated from each other by semicolons. Information is mined from this database by degree of similarity, and matching is not case-sensitive unless specified beforehand. This database can be updated.

Global data structure: Information is scraped from the internet and is ranked and checked for how useful it is to the ongoing conversation. With the help of this database the software can discuss topics about which it has no prior knowledge, which broadens the scope of the software.

Temporary data structure: This database stores information from the present conversation so that the bot can adapt itself for a better user experience. It is discarded at the end of the session, but in later versions of the software, i.e. after the login page is implemented, this data will be stored in the cloud so that the software adapts and reacts differently for every user and gives them a better experience.
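A sketch of how the internal store could be created with sqlite3, using the table names that the code in the Implementation chapter refers to (the column types and defaults here are assumptions):

# Create the internal response store; table names match those used by Brain.py and Login.py.
import sqlite3

conn = sqlite3.connect("counselorchatsqlite.db")
cursor = conn.cursor()
cursor.executescript("""
CREATE TABLE IF NOT EXISTS sentences (
    sentence TEXT,                 -- stored phrase set (phrases separated by semicolons)
    used     INTEGER DEFAULT 0     -- degree of usefulness / how often it has been served
);
CREATE TABLE IF NOT EXISTS words (
    word TEXT
);
CREATE TABLE IF NOT EXISTS associations (
    word_id     INTEGER,
    sentence_id INTEGER,
    weight      REAL               -- similarity weight linking a word to a stored sentence
);
CREATE TABLE IF NOT EXISTS users (
    email        TEXT,
    display_name TEXT,
    password     TEXT
);
""")
conn.commit()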

Proposed System Analysis and Design

  1. Introduction

The system is a chat-bot which, when conversed with, summarizes what the person is trying to convey; based on sentiment analysis and intent classification, attributes are assigned to features of the user, and questions are asked on this basis so that the bot gathers more information and the user comes to understand the situation more fully and more holistically while explaining it to the bot.

Clustering groups similar documents together into sets of documents. Latent Semantic Indexing extracts important words/phrases that occur in conjunction with each other in the text. Supervised machine learning is used for a number of natural language processing tasks, including the following:
Tokenization is the task of chopping sentences up into pieces called tokens, i.e. words, dropping punctuation and so on.
Part-of-speech tagging tags each word with its part of speech so that it can be used for further processing.
Named entity recognition tags words according to pre-defined categories.
Sentiment analysis is the process of understanding which sentiment is being conveyed through text and taking the context into account accordingly.
Thought vectors encode a sentence so as to predict the sentences around it; thus any composition operator can be substituted as a sentence encoder and only the objective function is modified.

RNN: Recurrent neural networks use sequential information. To predict the next word in a sentence it helps to know which words came before it, so they perform the same task for every element of a sequence, with the output depending on the previous computations.

LSTM: LSTM networks have memory blocks that are connected through layers. A block has components that make it smarter than a classical neuron, and a memory for recent sequences. A block contains gates that manage the block's state and output. A block operates on an input sequence, and each gate within a block uses sigmoid activation units to control whether it is triggered, making the change of state and the addition of information flowing through the block conditional. Each unit is like a mini state machine whose gates have weights that are learned during the training procedure.

AIML: AIML is the Artificial Intelligence Markup Language, an XML-based markup language. The basic unit of knowledge in AIML is called a category. Each category consists of an input question, an output answer, and an optional context. The question, or stimulus, is called the pattern; the answer, or response, is called the template. The two types of optional context are called "that" and "topic". The AIML pattern language is simple, consisting only of words, spaces, and the wildcard symbols _ and *. The words may consist of letters and numerals, but no other characters; the pattern language is case invariant. Words are separated by a single space, and the wildcard characters function like words. AIML tags transform the reply into a mini computer program which can save data, activate other programs, give conditional responses, and recursively call the pattern matcher to insert the responses from other categories; most AIML tags in fact belong to this template-side sublanguage. AIML currently supports two ways to interface with other languages and systems: the <system> tag executes any program accessible as an operating-system shell command and inserts the result in the reply, and the <javascript> tag allows arbitrary scripting inside the templates. The optional context portion of the category consists of two variants, called <that> and <topic>. The <that> tag appears inside the category, and its pattern must match the robot's last utterance; remembering the last utterance is important if the robot asks a question. The <topic> tag appears outside the category and collects a group of categories together; the topic may be set inside any template. AIML is not exactly the same as a simple database of questions and answers: the pattern-matching "query" language is much simpler than something like SQL, but a category template may contain the recursive <srai> tag, so that the output depends not only on one matched category but also on any others recursively reached through <srai>.
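A minimal sketch of defining and loading a single AIML category with the python-aiml kernel that the implementation uses later; the pattern, template and file name are illustrative:

# Write one AIML category to a file and load it into an AIML kernel.
import aiml

CATEGORY = """<?xml version="1.0" encoding="UTF-8"?>
<aiml version="1.0">
  <category>
    <pattern>I FEEL *</pattern>
    <template>Why do you feel <star/>?</template>
  </category>
</aiml>
"""

with open("example.aiml", "w") as f:
    f.write(CATEGORY)

kernel = aiml.Kernel()
kernel.learn("example.aiml")
# e.g. "Why do you feel VERY TIRED TODAY?" (casing follows the kernel's input normalisation)
print(kernel.respond("I feel very tired today"))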
  2. Requirement Analysis

    1.    Functional Requirements
      1.          Product Perspective
The chat-bot will be used to continue a person's flow of thinking while conversing, so as to make the flow of thoughts easier; it can also be used to understand the mood or the topic being hinted at directly or indirectly, rather than only the current utterance, in digital assistants. The product is meant to help people chat at any moment, or have their mental health analyzed whenever they want, and accordingly understand more about what is being affected and what is causing it.
• User interfaces: The user interface is the Messenger or Telegram API, which talks to the software through a webhook; a sketch of such a webhook is given below.
• Software interfaces: The system currently runs on Messenger or Telegram, or on any system with Python.
• Communications interfaces: Communication with Telegram or Messenger happens through their APIs, which can be reached from any system using ngrok.
• Operations: The TensorFlow seq2seq library is used; it helps in identifying the content that is subsequently used to train the chatbot.
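A minimal sketch of how such a Telegram webhook could be wired up with Flask, requests and ngrok; the token, route and reply helper are placeholders, not the project's actual deployment:

# Telegram webhook sketch: Telegram POSTs updates here; we reply via sendMessage.
import requests
from flask import Flask, request

BOT_TOKEN = "<telegram-bot-token>"  # placeholder
API_URL = "https://api.telegram.org/bot{}/sendMessage".format(BOT_TOKEN)

app = Flask(__name__)

def generate_reply(text):
    # Placeholder for the chatbot engine described in the Implementation chapter.
    return "You said: " + text

@app.route("/webhook", methods=["POST"])
def webhook():
    update = request.get_json(force=True)
    message = update.get("message", {})
    chat_id = message.get("chat", {}).get("id")
    text = message.get("text", "")
    if chat_id is not None and text:
        requests.post(API_URL, json={"chat_id": chat_id, "text": generate_reply(text)})
    return "ok"

if __name__ == "__main__":
    app.run(port=5000)  # expose publicly with: ngrok http 5000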
  2.          Product features
The bot will also be able to chat normally if anyone wants to talk about any topic, so that they can continue their thought process; beyond that, a person can interact with it to explore their mental condition, if possible find its cause, and come to understand their own standpoint on it better. The chat-bot will be used to continue a person's flow of thinking while conversing, so as to make the flow of thoughts easier, and it can also be used by other machines to understand the mood or the topic being hinted at, rather than only the current utterance, in digital assistants. The chatbot will be able to engage the user in a conversation of their choice. It will be able to make the user aware of its emotions by changing the expression of the bot while conversing, and the expressions given in response to the user's query will be realistic and worldly.
  3.          User characteristics
People checking the level of understanding of the program (Turing test): these people will check the level of understanding of the chat-bot and see how well it can understand and interact like a human. People exploring extended possibilities of their thoughts: people talking about random things because they want to see what can be drawn out of their thought process when standard questions are asked about it, and how their thinking develops with it. Digital assistants: they assist in our daily activities, and if they understand better what the person wants, they become better at what they do. People learning the language: the program will take into account that these people are not fluent in the language and will interpret them appropriately.
  4.          Assumption & Dependencies
Chat-bots are computer programs that interact with users in natural language. They are also used in dialog systems for various practical purposes, including customer service and information acquisition. Some chat-bots use sophisticated natural language processing systems, but many simply scan for keywords within the input and pull a reply with the most matching keywords, or the most similar wording pattern, from a textual database. The chat-bot being built will be able to understand the mood, respond with respect to it, and understand the whole conversation.

It is assumed that the person is reasonably fluent in the language, does not jump between thoughts, follows a flow, talks about one thing at a time, and does not pursue multiple thoughts at once. The user will try to have a meaningful conversation rather than lashing out or talking about random things, and only one person will be conversing at a time. Fluency of the person interacting is a major limiting constraint, as the software takes time to estimate the person's fluency and interact accordingly. Every person's vocabulary is different, so there may be phrases or words people use that are uncommon and that the software does not know about. When speaking referentially we are often very vague, so the software may not be able to identify the topic. Subjectivity shapes the sense in which people speak and can change the whole meaning of the conversation; if slang or usage the software has not been programmed for appears, the software may react unpredictably. Self-learning works by modelling: when the software has been interacting a lot with one type of person it converses better with that person, and when a different person who phrases things differently uses it, the software may respond as it would to the first person and therefore respond incorrectly. Later modules will have a login feature where each person's logs are stored in a dedicated area of the database, so that they do not affect any other person using the software.
  5.          Domain Requirements
The chatbot is used for self-learning and for understanding different scenarios. It should be able to remember the context of an ongoing conversation and the interests of a user from their past conversations. An ongoing conversation should be domain specific, i.e. the chatbot should not deviate from the topic, and the chatbot should be able to hold a meaningful conversation with the user.
  6.          User Requirements
The user should continue the conversation, give enough information to the bot, be truthful about that information and not mislead the bot. The software should remember the context of the conversation it has with the user.
    2.    Non Functional Requirements

    1.          Product Requirements
      1.                Efficiency (in terms of Time and Space)
Clustering depends on the features that are incorporated, and inbuilt libraries are used to implement each individual algorithm. The time complexity for training the neural network is O(n^2). For testing, the time complexity is O(n*m), where n is the number of sentences and m the number of words in each sentence. The time complexity when the user uses the system is likewise O(n*m), with the same parameters.
  2.                Reliability
The bot will behave the same way under the same conditions, but no two conditions are ever exactly the same; the bot needs to be trained to pick up the finer differences while limiting overfitting and the number of features.
  3.                Portability
The system is hosted on a server and can be used anywhere on a mobile or desktop as a web app.
  4.                Usability
The system is based on the requirements of the users and is intended to be helpful in daily life, improving people's standard of living.
  2.          Organizational Requirements
    1.                Implementation Requirements (in terms of deployment)
To receive accurate output, a massive data set needs to be used for training. Since the objective is to create a chat bot, it needs to be trained so that its vocabulary is up to the standard of the end user. The quality of the data set has a major impact on the acceptability of the system to the user. Apart from this, the system needs to be constantly tested and validated to avoid failures. Because of the large amount of data, testing and training impose a lot of overhead, which slows implementation. Performing real-time animation also requires a lot of manual work; it is a constant trial-and-error affair that eventually leads to an accurate result. Deployment requires a server to host the bot and the Telegram API, as the bot is hosted on Telegram.
  2.                Engineering Standard Requirements
The system will be able to converse with the user in English via text and will respond to any query instantly and appropriately. Copies of previous versions need to be kept so that, if the bot becomes overtrained or self-learning drifts in the wrong direction, it can be reverted and retrained. It must also be recorded at which point changes to the algorithm happened, and what difference it would make if the bot were trained from the start, so that this can be taken into account.
  3.          Operational Requirements (Explain the applicability for your work w.r.t. the following operational requirement(s))
  • Economic
The chatbot can help anyone check their mental health at any point of time and thus is useful for everyone.
  • Environmental
The relevance of the environment to the project is that in different situations a person is in a different mindset, and this affects their mental health in different ways.
  • Social
The chatbot will be socially aware: it is capable of detecting emotions while conversing with the user, reasoning about how to respond to the intentions behind those emotions, and generating appropriate social responses. The chatbot will be trained well enough to be socially responsible and will never suggest any reckless activity or behavior to the user. The product can be used in various social situations; it helps the person stay in a good mental condition and explore it further at any given moment.
  • Ethical
The chatbot will ensure that private and sensitive information from any conversation it has had with an individual remains safe; it is important to maintain the security of the bot's input and output databases in order to avoid the loss of sensitive information. The chatbot will behave ethically at all times and will not instigate the user to involve themselves in any activity that would be against the law.
  • Sustainability
The product can be sustained if all the system capability requirements are met.
  • Legality
The chatbot doesn’t persuade or suggest to the user to do anything illegal. The chatbot will be accountable for all its actions and suggestions.
  • Inspectability
The system stores the data in encrypted form; moreover, a person has the option of keeping and loading their own data themselves, so the data is not stored anywhere else.
    3.    System Requirements
    1.          H/W Requirements(details about Application Specific Hardware)
The proposed system is a software system and can be used on any mobile or PC, but training it requires a sufficiently powerful computer: preferably one with 8 GB of RAM or more and an NVIDIA GPU capable of running CUDA.
  2.          S/W Requirements (details about Application Specific Software)
The chat bot is developed in the python programming language, using the Tensorflow library for machine learning and deep learning. The entire list of libraries used and their dependencies are:
  1. Numpy
  2. Pandas
  3. Tensorflow
  4. Matplotlib
  5. Sqlite3
  6. Aiml
  7. Nltk
  8. Virtualenv
The other software required is:
  1. Telegram or Messenger Webhook
  2. NGROK
  3. CUDA

Implementation

  1. Methodology with Pseudo code

Incorporated context: To produce sensible responses, systems may need to incorporate both linguistic context and physical context. In long dialogs people keep track of what has been said and what information has been exchanged; the approach here is to embed the conversation into a vector, understanding what is important and continuing the conversation in that direction to gain more information. Personalized responses means there is no fixed set of responses, so not everyone keeps getting the same reply in a similar situation; the responses depend on many more factors and are generated from them. Coherence means finding what is important and asking the right questions so as to move in the right direction. Evaluation of the models is needed to measure whether or not the conversational agent is fulfilling its task so it can be modified accordingly, as well as to find the boundary values where the chatbot works and fails and how close these models are to the real world. Classification of the models into similar categories shows the relations between them and whether one category intersects with another, how much of it flows into the other, and how this affects it as another aspect to be considered. The sentiment of the user also has to be considered, so that the user is comfortable while interacting with the chatbot and can do so with ease.

Conversational Chatbot:

seq2seq

At the base of the generative model lies a seq2seq model that is capable of taking a sentence as input and generating another sentence as a response. The seq2seq model has two primary parts: the encoder and the decoder. The encoder is composed of several layers of left-stacked LSTMs and the decoder of several layers of right-stacked LSTMs. The purpose of the encoder is to encode the input sequence into a context vector; this context vector is then used as input by the decoder to generate the output sequence.

Bot.py (Python 2):

import os
import matplotlib.pyplot as plt
import matplotlib.image as im
from matplotlib.pyplot import ion
from login import log

ion()
x = log()
if x == 1:
    # Show each Rorschach image and ask the user to describe it.
    path = "/home/gifty/PycharmProjects/bot/Pics"
    pics = os.listdir(path)
    for i in pics:
        img = im.imread(path + '/' + str(i))
        plt.imshow(img)
        plt.draw()
        print("Please enter the image description according to you :")
    print("Therapist\n---------")
    print("Talk to the program by typing in plain English, using normal upper-")
    print('and lower-case letters and punctuation.  Enter "quit" when done.')
    print('=' * 72)
    print("Hello.  How are you feeling today? Can you explain the image to me?")
    execfile('brain.py')  # Python 2: run the conversation loop
else:
    print("Reload")

Brain.py (Python 2):

import os
import re
import sqlite3
import aiml
from collections import Counter
from string import punctuation
from math import sqrt

conn = sqlite3.connect('counselorchatsqlite.db')
cursor = conn.cursor()
os.chdir('/home/gifty/PycharmProjects/bot')

def get_words(text):
    # Tokenize the text into words/punctuation and count the occurrences.
    wordsRegexpString = '(?:\w+|[' + re.escape(punctuation) + ']+)'
    wordsRegexp = re.compile(wordsRegexpString)
    wordsList = wordsRegexp.findall(text.lower())
    return Counter(wordsList).items()

def get_id(entityName, text):
    # Return the rowid of a word/sentence, inserting it first if it is new.
    tableName = entityName + 's'
    columnName = entityName
    cursor.execute('SELECT rowid FROM ' + tableName + ' WHERE ' + columnName + ' = ?', (text,))
    row = cursor.fetchone()
    if row:
        return row[0]
    cursor.execute('INSERT INTO ' + tableName + ' (' + columnName + ') VALUES (?)', (text,))
    return cursor.lastrowid

kernel = aiml.Kernel()
kernel.setBotPredicate("name", "Xen")
kernel.learn("std-startup.xml")
kernel.respond("LOAD AIML B")

while True:
    input = raw_input(">> User :")
    if input == "quit":
        exit()
    bot_response = kernel.respond(input)
    res = bot_response

    # Associate the words of the user's input with the chosen response.
    words = get_words(input)
    words_length = sum([n * len(word) for word, n in words])
    sentence_id = get_id('sentence', res)
    for word, n in words:
        word_id = get_id('word', word)
        weight = sqrt(n / float(words_length))
        cursor.execute('INSERT INTO associations VALUES (?, ?, ?)', (word_id, sentence_id, weight))
    conn.commit()

    # Rank the stored sentences against the current response; fall back to a
    # random, least-used sentence when nothing matches.
    cursor.execute('CREATE TEMPORARY TABLE results(sentence_id INT, sentence TEXT, weight REAL)')
    words = get_words(res)
    words_length = sum([n * len(word) for word, n in words])
    for word, n in words:
        weight = sqrt(n / float(words_length))
        cursor.execute(
            'INSERT INTO results SELECT associations.sentence_id, sentences.sentence, '
            '?*associations.weight/(4+sentences.used) FROM words '
            'INNER JOIN associations ON associations.word_id=words.rowid '
            'INNER JOIN sentences ON sentences.rowid=associations.sentence_id '
            'WHERE words.word=?', (weight, word,))
    cursor.execute(
        'SELECT sentence_id, sentence, SUM(weight) AS sum_weight FROM results '
        'GROUP BY sentence_id ORDER BY sum_weight DESC LIMIT 1')
    row = cursor.fetchone()
    cursor.execute('DROP TABLE results')
    if row is None:
        cursor.execute(
            'SELECT rowid, sentence FROM sentences WHERE used = '
            '(SELECT MIN(used) FROM sentences) ORDER BY RANDOM() LIMIT 1')
        row = cursor.fetchone()
    cursor.execute('UPDATE sentences SET used=used+1 WHERE rowid=?', (row[0],))
    print(">> Xen :" + res)

Login.py (Python 2):

import sqlite3

def log():
    # NOTE: SQL built by string formatting, as in the original; parameterized
    # queries would be safer against injection.
    conn = sqlite3.connect('counselorchatsqlite.db')
    cursor = conn.cursor()
    sql = "SELECT display_name FROM users WHERE email='%s' and password='%s'" % \
          (raw_input("Email : "), raw_input("Password : "))
    try:
        tmp = cursor.execute(sql).fetchall()
        if tmp:
            x = 1
        else:
            # Unknown user: fall back to sign-up.
            print("Error: Sign Up")
            sql = "INSERT INTO users(email, display_name, password) VALUES ('%s', '%s', '%s')" % \
                  (raw_input("Enter email:"), raw_input("Enter User Name : "), raw_input("Enter Password : "))
            try:
                cursor.execute(sql)
                conn.commit()
            except:
                conn.rollback()
            conn.close()
            x = 1
    except:
        print("Error: Sign Up")
        conn = sqlite3.connect('counselorchatsqlite.db')
        cursor = conn.cursor()
        sql = "INSERT INTO users(email, display_name, password) VALUES ('%s', '%s', '%s')" % \
              (raw_input("Enter email:"), raw_input("Enter User Name : "), raw_input("Enter Password : "))
        try:
            cursor.execute(sql)
            conn.commit()
        except:
            conn.rollback()
        conn.close()
        x = 1
    return x

if __name__ == '__main__':
    print(log())

Sign_Up.py (Python 2):

import sqlite3
from login import log

def sign_up():
    conn = sqlite3.connect('counselorchatsqlite.db')
    cursor = conn.cursor()
    sql = "INSERT INTO users(email, display_name, password) VALUES ('%s', '%s', '%s')" % \
          (raw_input("Enter email:"), raw_input("Enter User Name : "), raw_input("Enter Password : "))
    try:
        cursor.execute(sql)
        conn.commit()
    except:
        conn.rollback()
    conn.close()

x = log()
if x == 0:
    sign_up()
else:
    execfile('bot.py')  # start the chat once the user is logged in

Datasets: the AIML psychology and personality sets, among others; the theinkblot.com dataset; and the Eliza dataset.
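The listings above implement the AIML/retrieval side of the bot; the seq2seq encoder-decoder described at the start of this subsection is not shown there. A minimal sketch of such an encoder-decoder, assuming the tf.keras API and illustrative vocabulary and dimension sizes (an assumption of how it might be wired up, not the project's exact model):

# Seq2seq sketch: an LSTM encoder produces a context vector (its final states),
# which initialises an LSTM decoder that generates the reply token by token.
from tensorflow.keras.layers import Dense, Embedding, Input, LSTM
from tensorflow.keras.models import Model

vocab_size = 5000   # assumed vocabulary size
embed_dim = 128     # assumed embedding dimension
latent_dim = 256    # assumed LSTM state size

# Encoder
encoder_inputs = Input(shape=(None,))
encoder_embed = Embedding(vocab_size, embed_dim)(encoder_inputs)
_, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_embed)

# Decoder, conditioned on the encoder's final states
decoder_inputs = Input(shape=(None,))
decoder_embed = Embedding(vocab_size, embed_dim)(decoder_inputs)
decoder_outputs, _, _ = LSTM(latent_dim, return_sequences=True, return_state=True)(
    decoder_embed, initial_state=[state_h, state_c])
predictions = Dense(vocab_size, activation="softmax")(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], predictions)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()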

  2. Snapshots
(Figure: TensorBoard training loss curve)

Results and Discussion

  1. Testing

Goldberg's test is used to understand whether the bot had any effect on the user and what kind of effect it was. Feedback is gathered from the user on how satisfied they were and what change they experienced. Supervised results let professionals judge where the bot lags and where it excels.
  2. Performance Metrics

The bot can be analyzed based on the feedback received from users, and by testing it against the training data and confirming the output.
  3. Results obtained (Graph if required)

As seen in the screenshots, the chatbot can continue the conversation, and after learning on its own for a while it will be able to generate its own answers and interpret what the user is trying to say more accurately. The bot leads the conversation in the right direction to get to know more about the user, and after being trained on more data it will also be able to give an analysis of the user's mental health.

Conclusion, Limitations and Scope for future Work

Conclusion: The bot understands the interests of the end user and tries to keep the user interested in the ongoing conversation. It is able to direct the user in the right direction and isolate the problem to a certain extent.

Limitations: The bot can be led astray or fooled easily by the user. It does not change topics easily, requires a lot of data before it can be trained again, and cannot answer questions if the user asks it to explain something.

Future Work: The categories used for classification can be improved using the following methods. Random forest: random forests, or random decision forests, are an ensemble learning method for classification, regression and other tasks that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees. A random forest consists of a collection, or ensemble, of simple tree predictors, each capable of producing a response when presented with a set of predictor values. For classification problems this response takes the form of a class membership, which associates, or classifies, a set of independent predictor values with one of the categories present in the dependent variable; for regression problems, the tree response is an estimate of the dependent variable given the predictors. XGBoost: XGBoost (eXtreme Gradient Boosting) is an advanced implementation of the gradient boosting algorithm. In addition, the bot can be trained with the help of the data it has gathered to reduce the percentage of retrieval responses it sends, and it can be made to use grammatically correct English in its generative responses. A small sketch of the random-forest option is given below.
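A minimal sketch of the proposed random-forest classifier over text categories, with illustrative utterances and labels (not the project's data):

# Random forest over TF-IDF features, as one way to improve category classification.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

texts = [
    "I feel anxious about everything lately",
    "I can't stop worrying before exams",
    "I had a really good week",
    "Things have been going well for me",
]
labels = ["negative", "negative", "positive", "positive"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X, labels)

print(forest.predict(vectorizer.transform(["I am worried about my results"])))  # e.g. ['negative']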

References

Douglas L. Medin and Marguerite M. Schaffer, "Context Theory of Classification Learning", Psychological Review.
Ryan Kiros, Yukun Zhu, et al., "Skip-Thought Vectors", Advances in Neural Information Processing Systems (NIPS).
Pandorabots, https://www.pandorabots.com
Python Documentation
