> I think most people I know offline hate interacting with chat bots as products
It's hilarious to me when people bring back chat bots as a concept.
We had chat bots a few years ago, and almost all larger companies had built strategies around them. The idea was that they could significantly reduce call centre staff and improve customer experience.
And it wasn't just that the quality of the conversations was poor; for many users it's the human connection of being listened to that matters, not just getting an answer to their problem.
No, it's that chat bots have no actual power to fix most issues. They exist to make it more difficult to escalate up the bureaucracy to staff who can actually problem-solve, issue refunds, give credits, etc. Chat bots are merely a filter to get rid of easily pacified pushover customers and those who refuse to read instructions or documentation.
And the ones that can do something would be better served as a simple form.
My state has used a chatbot for car registration renewals for years. It works just fine, I can't truly complain, but it's literally just a higher friction way to fill in a short form. Why did it need to be a chatbot?
The usual explanation, true or not, is that someone was selling the state a chatbot as the modern solution, and since the buyer isn't spending their own money, they will happily buy it without thinking about whether it's useful.
Yep, if a chatbot could do all those things then I'd honestly rather use that chatbot than talk to a human, unless I had some very specific concern. But it seems that GPT models can understand other humans just fine.
I recently un-forked a repository on GitHub. There is no UI for this; you have to go through customer service, which was mostly a chat bot. I figured that if anyone has a bot that can actually do something, it's GitHub, so I went with it, and my repo was un-forked right away. I think there was a person involved, but that felt mostly like a screening process. And on that point I agree: I like the human touch when I order, say, coffee. Not when I just need to get something done.
Phone queues are frustrating. But I'm not there to get mindless text generated back at me. I've already experienced it. Reminds me of all the dystopian art trying to depict the human despair and powerlessness of facing the system that society has created.
They don't get stuff done, and they never will. Have you never experienced a call tree? They're universally useless except for getting you to a human in the approximate right department. An LLM chatbot is just a call tree that says sorry to you if you swear at it.
And it's not supposed to solve your problems. Solving your problems costs investors money. It's supposed to make you go away.
Recently I wanted to file a chargeback for something that was not delivered. The "dispute this transaction" chatbot told me this scenario (actual dispute) is not in the call tree - contact customer support, because it only knows all the different ways your dispute might not be a real dispute so they don't have to process it (e.g. kids used the credit card). The customer support chatbot told me to go to the transaction page and click on "dispute this transaction". The only way to actually file it was to find the magic incantation to talk to a human. And no, "talk to human" doesn't work. It just gives you a blurb about using the chatbot more effectively.
> They don't get stuff done, and they never will. Have you never experienced a call tree? They're universally useless except for getting you to a human in the approximate right department. An LLM chatbot is just a call tree that says sorry to you if you swear at it.
Sorry, this is entirely incorrect, for many reasons, not least your universality in extrapolating your own experiences to everyone else. "They never will"? Well, "never" is a long time. "Universally useless"? Obviously not, if at least some people find them useful. "An LLM chatbot is just a call tree that says sorry to you if you swear at it"? Entirely wrong, and it betrays a misunderstanding of what LLMs actually do and how they behave.
I recently had to have an Amazon order refunded, and it happened entirely through a chatbot multiple-choice tree, and it wasn't even an LLM, just a dialogue tree. It worked fine, I got the amounts refunded as intended. Now, with LLMs, they are even more useful than what I experienced, as they actually understand your intent as well as a human would. If you disagree with that fundamental premise, then I'm not sure what to tell you other than to use GPT-4 via ChatGPT.
In short, just because you had bad experiences doesn't mean everyone else has as well.
patio11's recent "Seeing Like a Bank" makes a pretty persuasive argument that these kinds of first-pass filters are in fact very important to keep costs reasonable for you and me, for the rare times we do actually have a problem and usually have to walk far up the chain of command to get it fixed.
The thing about that line of argument is that the balance between things being sorted by the bot vs needing a human is almost never right.
My current employer has a slack helpbot where you dm the bot and it does a first pass at trying to find the right ticket/form etc to solve your problem. If it can't, it opens a regular helpdesk ticket with the info you have given it so far and the helpdesk sorts your problem out. It's great.
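The pattern described above (bot answers what it can, then hands the accumulated context to a human rather than discarding it) can be sketched in a few lines. This is a hypothetical illustration, not any real helpdesk API; the knowledge base, function names, and ticket format are all invented.

```python
# Sketch of a "first pass, then graceful handoff" helpbot. The key design
# choice: on escalation, everything the user already told the bot travels
# with the ticket, so the human doesn't start from square one.

KNOWN_ANSWERS = {
    "reset password": "Use the self-service portal to reset it yourself.",
}

def open_helpdesk_ticket(user, transcript):
    """Stand-in for a real ticketing API; returns a fake ticket id."""
    return f"TICKET-{len(transcript)}-{user}"

def handle_request(user, message, context):
    context.append(message)  # keep everything the user has said so far
    for key, answer in KNOWN_ANSWERS.items():
        if key in message.lower():
            return ("answered", answer)
    # Can't resolve it: hand over the accumulated context, don't drop it.
    return ("escalated", open_helpdesk_ticket(user, transcript=context))
```

The insurance-chatbot failure mode in the sibling comment is exactly the absence of that `transcript=context` handoff.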
Most corporate chatbots, however, are not like this. For example, when I recently went to resolve a problem with an insurance policy, the website pushed me to the chatbot. After a bit of annoying to-ing and fro-ing, the chatbot told me it couldn't do anything and I had to call up. At this point, all the information I had given it while it was trying to resolve my problem is in the dumpster, and as far as it's concerned, job done. I, however, have wasted a bunch of time and am back to square one. Worse than that, while I sit in the (now incredibly long) phone queue to speak to the few human helpdesk agents who remain, I have to listen to the recording repeatedly telling me "why not use our super-helpful chatbot".
No, it's also about data entry. It would be a terrible waste for a human to be sitting there going: "No, that account ID isn't right either, please check again."
I work in a small team with incredibly good programmers. ChatGPT as of right now is not good enough to replace asking one of them for help when I am stuck. But it is good enough to replace Stack Overflow.
Look forward.
A year ago I would have so agreed with what you say. But look where we are already. And think about what will come. My god. We're in for a ride here. Saying that chatbots sucked last time we tried them misses that a revolution is taking place. It's like saying solar will never work because a single solar panel cost a fortune back in the '70s.
My take on this is that not only are chatbots coming back big; we're getting the droids from Star Wars within a decade or so.
But you still don't, or can't, solve the fundamental issue of giving chatbots actual power to do things. And if you can give a chatbot the power to do the thing the customer wants, then the chatbot is likely the most inefficient way to do it. It should just be part of the website/app.
Giving them power you don't give ordinary users means they need to be able to discriminate. Even humans are vulnerable to social engineering, but the fact that every human is different makes that attack hard to scale. If you figure out an empowered chatbot's prompt-injection soft spot, you could potentially scale the fraud, which is a dangerous problem.
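One common answer to this concern is to keep authorization outside the model entirely: the LLM may propose an action, but server-side policy decides whether it runs. The sketch below is a hypothetical illustration of that idea; the action names and helper functions are invented, not any real chatbot framework.

```python
# Hypothetical sketch: a tool-calling chatbot where policy, not the model,
# grants power. A prompt injection can at worst *propose* an action; it can
# never talk the system into executing one outside the allow-list.

ALLOWED = {"check_order_status"}      # safe, self-service actions
PRIVILEGED = {"issue_refund"}         # real power: require a human sign-off

def run(action, args):
    return f"ran {action} with {args}"

def open_ticket(user, action, args):
    return f"escalated {action} for {user} to a human agent"

def execute_tool_call(user, action, args):
    """The model proposes (action, args); this policy layer decides."""
    if action in ALLOWED:
        return run(action, args)
    if action in PRIVILEGED:
        # Don't trust the chat transcript as authorization; escalate.
        return open_ticket(user, action, args)
    raise PermissionError(f"forbidden action: {action}")
```

Nothing in the prompt can widen `ALLOWED`, which is the scalability argument in reverse: the fraud can't scale past the policy layer.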
Well, maybe. Or it could be that the low hanging fruit has already been picked and the remaining 10% to get it real-world usable will be the thing that takes a long time (see VR, self driving cars, etc)
> But it is good enough to replace stack overflow.
For the kind of things I'm using Stack Overflow for, I prefer it. It's much faster to google and scan the SO page than to wait for GPT to type out an answer and read that.
GPT is only useful if the code needs to be adapted, and I rarely use stackoverflow for that.
I'd suggest you try Copilot, specifically Copilot Chat. I was skeptical about it, as ChatGPT felt like it had a lot of issues with code, but I tried it yesterday to build a small game (I do web APIs for my day job, so I have zero experience in that) and it honestly feels game-changing.
It's like you have a mid-level developer sitting next to you, who has been using the libraries you are using for years.
It still hallucinates, and it still makes mistakes, but many questions I would have turned to Google for, it was able to answer right there and explain exactly why my code was not right. It's also surprisingly good at figuring out logic issues; I always get confused about whether X and Y are up/down or left/right.
The main issue with traditional chatbots is they rarely provide extra value over simply reading the policy of a particular web property or looking things up yourself on your account. I don't need a UPS bot to ask for my tracking # just to regurgitate what I can see in 5 seconds. It is beyond frustrating for that to gatekeep an agent who can actually reach the driver or center that last had my seemingly lost package, or troubleshoot why the system hasn't shown movement in over a week.
The only time the human connection is helpful is when a business makes a mistake that can't really be addressed. A chatbot, esp. with the power of GPT-4, has the potential to be considerably more helpful than the average call center employee, who is likely not a native speaker of your language.
A few months ago, I would’ve agreed with this in theory, but having interacted with an Apple chat bot recently that was fast, seemed empathetic, and immediately solved my problem, I do have to wonder if LLM powered support agents may finally swing this the other direction
Until this point in time I've been extremely sceptical about chat bots. But LLMs are changing the playing field. Chat bots often don't make sense, but I do think there are a lot of cases where they can do things other interfaces would struggle with.
> that for many users it's the human connection of being listened to that is important
That may be a very small percentage of all users. What users seek is a quick answer to their queries and a resolution to their problem, so they can get back to their own lives. Everyone hates waiting for an agent and being on hold. Most hate rude or clueless staff.
People hated bots because they were slow and stupid. But I prefer doing all my banking in the app/site to talking to some human at the bank. For info, I would rather sift DuckDuckGo/the internet for 30-120 min before giving up and finding someone to talk to.
So, if a bot can solve my query faster and better than a human, I'll prefer the bot.
I would appreciate a situation where the chatbot enters the conversation alongside a human customer service representative. It's a conversation among three parties, with the chatbot having the option to ask the human, and the human tuning in a bit when needed.