Any further feedback on how to improve the project?
Guest
2021-01-23 17:26:20
AI has less of a human touch to it. Usually, the other person has to be able to sense the response the user wants in order to relieve their stress, and AI does not have those EQ skills for now.
Guest
2021-01-23 17:05:52
Does your team have the relevant mental health expertise and AI development background to successfully execute on this? I would imagine that at first it would require a lot of human input, and it is always difficult for an AI chatbot to understand the nuances in a human's response, unless you decide to limit it to predetermined responses that the user can choose from. This is especially true for youths, who might not be that articulate or know how to express themselves properly. When it comes to mental health, things can escalate quickly, and getting improper advice or support could be worse than getting no support at all, so will you have measures in place to ensure that the chatbot's responses are appropriate? Will you have an option to escalate matters to a human?
Guest
2021-01-22 12:50:10
I think the idea is wonderful, but I do have some concerns about implementation. It would be really important to ensure that the therapies administered are suitable for the particular individual's context. To ensure this, therapists often (and should) take a comprehensive history before recommending an intervention plan. I worry that asking so many questions through a bot may make people feel like it's too much work or too many questions, and they'll give up halfway. Another really important point to address is that Maria needs to be able to flag when situations are urgent or dangerous and direct people to crisis support. I get that Maria is designed for common stressors like anxiety, but what happens if someone says they are going to self-harm or hurt someone else? There is an ethical concern here, and it really needs to be considered.
Guest
2021-01-22 07:15:20
Handling human emotions is a very sensitive matter, and a chatbot may be too simple to detect the tone and emotions in conversations. Perhaps it can, but not until you have collected a ton of data. This requires a long cycle of collecting data, deciphering it, and tweaking the chatbot before it genuinely becomes useful, I feel. But do try! It would be a nice idea to experiment with.
Guest
2021-01-21 15:02:39
The team should engage the advice and support of a mental health professional to guide them in the development of Maria. Could they partner with an organization like Samaritans of Singapore, which runs a 24-hour crisis hotline? The chats from the crisis hotline may be helpful in programming the AI, as they show how humans respond and interact.
Guest
2021-01-21 14:09:09
Sounds like a good idea on paper, but I question the ability of a chatbot to replace human connection. Half the time, Siri and Google Assistant tell me that they do not understand my request. If this happens to someone who is already upset or depressed, it could be an even greater source of stress.
Guest
2021-01-02 17:26:11
A chatbot is an interesting concept, but I believe that for a sensitive issue like discussing mental health, we should not leave it to robots, as they might just make things worse. Sometimes things are misinterpreted by chatbots, and I am not confident that they would be able to provide comfort (empathy is a human quality!). I would prefer reaching out to human counsellors instead.
Guest
2020-12-28 16:03:39
Yes, you need to state clearly how our data is being used, especially names and confidentiality.