Thank you for your feedback. Our bot trains on your website data, and there is a knowledge base section within the bot where you can manually add information that is not on your website, which it can also learn from. Not only this, it can also be trained on PDFs or docs.

Welcome to UKBF @Quickbots
Not yet seen a chat bot in any flavour that was of any use. This weekend I had a technical question about a product and the chatbot couldn’t answer the question. It just spewed out a load of guff.
Just had a look at yours and it asks for my name and email so immediately got rejected.
That's the biggest problem with LLMs at the moment - they make stuff up, and so you can't rely on them being honest. If you can't trust the truthfulness of the answers then they're as good as useless.
Current LLMs pretty much never say "I don't know"; they make up lies instead, as they don't really have a concept of truth.
My experience has been that they are mostly factually correct, but in a woolly, imprecise kind of way.
However, accuracy is increasing a lot, and if these are trained on actual data then they might be good?
Paul.
Just had this response from a bot:
"Sorry, I do not understand, please contact us via email or completing the online Contact Form."
Probably because it was prompted specifically to say that in that situation.
"Look at the answers I got from the chat I've just had. They aren't very good and mostly irrelevant to the question."

Thank you, I had a look, and I have now improved it to give better responses based on your two questions. So it has room to improve and be tweaked. Looking at the overall list of questions, it did respond well apart from 2-3 questions. Thank you for the feedback.
"And how do you deal with data protection and security? How do you stop an unauthorised customer services person from getting access to accounts or personal data?"

The bot is only trained on what it is fed. So if it is trained on your website, that is already public knowledge, and if you want to add further documents, again these will be public knowledge. It cannot access databases at the moment. It is a great tool for training on business processes and general customer support questions such as product returns or policies. We are looking at how we can incorporate certain CMSs so it can interrogate them, and of course data protection will be at the forefront, with relevant steps taken.
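The point above - a bot that only answers from what it is fed, and escalates to a human otherwise - can be sketched roughly as below. This is a hypothetical illustration, not the vendor's actual implementation: the knowledge base entries and the keyword-overlap matching are made up for the example (real products typically use embedding-based retrieval).

```python
# Minimal sketch of a knowledge-base chatbot that declines to answer
# instead of guessing. All data here is invented for illustration;
# real bots use embedding retrieval rather than keyword overlap.

def answer(question: str, knowledge_base: dict[str, str]) -> str:
    """Return the best-matching entry, or admit we don't know."""
    q_words = set(question.lower().split())
    best_score, best_text = 0, ""
    for topic, text in knowledge_base.items():
        # Score each topic by how many of its words appear in the question.
        score = len(q_words & set(topic.lower().split()))
        if score > best_score:
            best_score, best_text = score, text
    # No overlap at all: hand off to a human instead of making something up.
    if best_score == 0:
        return "Sorry, I do not know - please use the online Contact Form."
    return best_text

kb = {
    "product returns policy": "Returns are accepted within 30 days.",
    "delivery times": "Orders ship within 2 working days.",
}
print(answer("What is your returns policy?", kb))
print(answer("How do you calculate interest?", kb))
```

The fallback branch is the part several posters above are asking for: when nothing in the knowledge base matches, the bot says so rather than inventing an answer.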
No, they're not, Mark.
It all comes down to your prompt. You have to talk to it in baby language, so you need to give it as much information as possible so it can respond more precisely. Again, different types of language models use various algorithms. Perfecting prompts will improve your answers.
ChatGPT and Google Bard regularly provide completely wrong answers on any topic.
I asked Bard to give me the summary of a court case and it completely made it up. Nothing to do with the case I gave it whatsoever.
Same thing with ChatGPT.
Sometimes they just make stuff up.
I don't accept that premise.
I think that's a large part of the problem.
"I had a question today about the use of Dobânda Anuală Efectivă (DAE, the effective annual interest rate) in calculating repayments. DAE is referred to on the website, but only as an option in the settings. What it doesn't do is explain how DAE is calculated. Which means the chatbot wouldn't be able to answer the question. But if a human was available, the visitor would get the answer they wanted and be more likely to convert. Which means I really want your chatbot to ask me for help rather than just post guff."

Yes, if you had a definitions file with the methodology/calculation, or added it in the knowledge base, the bot would pull this through for the customer querying it. You can also tweak the output by querying the bot and then implementing the answer in the backend.
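For readers wondering what such a "definitions file" entry might contain: the full DAE/APR used in consumer credit is defined by a present-value equation over every drawdown and repayment, but a simplified, commonly cited version is the effective annual rate implied by compounding a nominal rate several times a year. The sketch below shows only that simplified form; the rate figures are example inputs, not anything from the thread.

```python
# Illustrative only: the regulatory DAE (Dobânda Anuală Efectivă) is an
# APR defined by a present-value equation over all payments. The
# simplified version below is just the effective annual rate implied by
# compounding a nominal rate n times per year: (1 + i/n)^n - 1.

def effective_annual_rate(nominal_rate: float, periods_per_year: int) -> float:
    """Effective annual rate from a nominal rate compounded n times a year."""
    return (1 + nominal_rate / periods_per_year) ** periods_per_year - 1

# Example: a 12% nominal rate compounded monthly.
rate = effective_annual_rate(0.12, 12)
print(f"{rate:.4%}")  # about 12.68%, i.e. more than the 12% nominal rate
```

This is exactly the kind of short methodology note that, once added to the knowledge base, would let the bot answer the DAE question instead of deflecting.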
I am not sure what sanction they received but it will not have been comfortable.
"It appears they got away rather lightly; the two lawyers involved were given a joint fine of $5000."

Being 'spoken to' by a judge is never an easy situation. It happened to me once 'in chambers', so the only witness was the prosecution lawyer. But it was not comfortable! I wonder what their bosses said - I think the firm they worked for was also fined, and the judge made a point of mentioning the paucity of technological research materials at their firm. I looked it up after my previous post.