Chat in search integration for e-commerce platforms

A quick note to self: last week I again saw an interesting way of bringing automated chat technology into a mobile interface: a “question” button, one of the prominent buttons at the top of the app of e-commerce giant Amazon. Ref: https://www.marketplacepulse.com/articles/amazon-adds-an-ai-shopping-assistant

I wonder why Amazon chose a separate entry point for its shopping advisory functionality. It could have been integrated with the search bar at the top of the page. However, at the time of writing, they have chosen not to do so…

Also, a button labeled “questions” sets the expectation of ending up on an FAQ page. Anyway, let’s see whether this becomes a more common user pattern. For now, I am just adding this article as another example of integrating chatbot technology into an e-commerce app while keeping search separate.

Search & chat coming together

Since ChatGPT was introduced to the general public in November 2022, I have been amazed to see how quickly a conversational interface was adopted at massive scale (1 million users within 5 days of launch). It really showed the power of the GPT model to the average user, using nothing more than a chat interface. For those who don’t know, earlier versions of the GPT model were already used in Google Assistant’s “voice” technology. There, some extra layers of potential failure were added on top as well (noise filtering, speech-to-text interpretation, etc.). Still, back in 2018 I was often amazed at how well the model recognized written intents and entities.

With the mass adoption of conversational AI tools like ChatGPT and BARD, the average user is getting used to asking longer questions and receiving a well-matched generated answer in return. Now I am intrigued by how this new behavior will affect users’ current search behavior. Will people start asking longer questions in the search box as well? Or will conversational UIs need to come up with smarter ways to gather contextual information for a short question, for example by adding buttons that ask for context?
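To make this question concrete, here is a toy sketch of what “longer questions in the search box” could mean for a search system: breaking a conversational query down into the keywords and filters that a classic search box works with. The keyword lists and the `parse_query` function are entirely my own hypothetical illustration, not how any real platform does it:

```python
# Toy sketch: mapping a longer, conversational query onto the structured
# keywords + filters a classic search box expects. All keyword lists here
# are made up for illustration.
import re

COLOR_WORDS = {"red", "blue", "black", "white", "green"}
STOP_WORDS = {"i", "am", "looking", "for", "a", "under", "euros"}

def parse_query(query: str) -> dict:
    """Split a natural-language query into search keywords and filters."""
    tokens = re.findall(r"[a-z0-9]+", query.lower())
    filters = {}
    # Pull out a color filter if the user mentioned one.
    colors = [t for t in tokens if t in COLOR_WORDS]
    if colors:
        filters["color"] = colors[0]
    # Pull out a price ceiling phrased like "under 50".
    match = re.search(r"under (\d+)", query.lower())
    if match:
        filters["max_price"] = int(match.group(1))
    # Whatever remains becomes the plain search keywords.
    keywords = [t for t in tokens
                if t not in COLOR_WORDS and t not in STOP_WORDS
                and not t.isdigit()]
    return {"keywords": keywords, "filters": filters}

print(parse_query("I am looking for a red running shoe under 50 euros"))
# → {'keywords': ['running', 'shoe'], 'filters': {'color': 'red', 'max_price': 50}}
```

A real system would of course use a trained intent/entity model rather than keyword rules, but the shape of the problem is the same: one long utterance, several structured search parameters.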

An interesting example I saw recently is Klevu MOI, which introduces a toggle between chat and search: https://www.klevu.com/moi

Klevu MOI

Also see this video: https://www.youtube.com/watch?v=B3dRF7DQwho

Goodbye Actions. Hello useful voice systems.

The word is out: Google is sunsetting Conversational Actions on June 13th, 2023. I fully understand Google’s choice, as the current setup didn’t get the user traction it deserved. As a developer of several Actions myself, and as an experienced user, I learned that it was hard to get users to the actual core of a Google Action without losing them along the way.

When I worked at Rabobank, a user first needed to set up a connection with his or her account. This step alone caused many users not to join the channel, as it required an up-to-date smartphone with the Google Assistant app. Even when that was installed, it still took a lot of effort to connect via a two-factor authentication system (the Rabo Scanner) to access the information users wanted to request (their balance). Another point where users bailed out was the privacy consent: many users didn’t, and still don’t, trust big external parties with their financial data.

Also, for my private projects like the 7 minutes workout Action, I found it was easier to add the video to YouTube and ask Google to play it there than to ask for the Action itself.

The technology was just not ready for all these use cases. I quickly got irritated when Google Home misinterpreted a question or command I gave, or when I had to sit through the same (too long and slow) response after it gave me a wrong answer. However, I still think voice technology can add value to people’s lives.

Use cases I personally love to use at home: asking until what time a shop is open, playing music, asking for the latest news, activating the lights in a certain setting, and letting the kids ask how animals or musical instruments sound.

Other use cases I wish I could use more often come up when I am driving. Can you imagine asking “Can you navigate to a gas station on our route?”, “Next song please?”, or “Where is the next drive-through restaurant on my route?”, and then having your request answered correctly?

However, I have disabled all voice assistants for now because of too many false positives in triggering them, or, in the case of the car-based voice system, because it simply doesn’t understand me. So, let’s focus on creating great, usable web content (as mentioned here) and try this technology again in a few years 🙂

Voice & search

Just a short update from my current position at bol.com.

Finding exactly what a user wants is hard, even when you have visual clues available. In the past, I experimented with voice applications. In those customer journeys, I thought the biggest problem was the lack of interface elements: a user can basically say anything to a voice interface. In the visual world, you can compare this to a user drawing his own filtering and sorting interface. If you don’t know exactly what you are looking for, or how things are called at the information-providing end, you probably won’t find what you were looking for.

So my focus in information retrieval has currently shifted from voice to visual search at bol.com. It is still incredibly hard to understand users. Many features are built to help users find what they need, and still people are creative enough to end up finding things they didn’t expect.