Welcome to the latest issue of the payments weekly. Today, we discuss the relatively recent launch of a generative AI tool that can assist with your payment analytics.
What's happening
Pagos, the payment intelligence company, introduced their new service Pagos Copilot, and it is one of my favorite payment product releases this year.
Zoom in
In the demo example, the payment analyst asked about their company's payment success rate, and the copilot returned a chart showing the authorization rate within the chat interface.
After receiving the high-level information, the analyst dove deeper into the decline reasons. Pagos generated charts with payment data, provided detailed answers and links to documentation for follow-up questions, and proposed a specific solution to resolve an issue. If you want to see more details, watch the video; it is very well made.
This demo shows how the new product solves multiple pain points: what used to require several custom SQL queries and hours of work can now be done quickly by asking questions in the chat.
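For a sense of what that manual work looked like, here is a hypothetical example of the kind of query an analyst might previously have maintained by hand. The table and column names are invented purely for illustration and are not Pagos' schema.

```python
# Hypothetical example only: the schema is invented for illustration.
# This is the sort of hand-written query a chat-based copilot replaces.
import sqlite3

AUTH_RATE_SQL = """
SELECT
    strftime('%Y-%m', created_at) AS month,
    SUM(CASE WHEN status = 'approved' THEN 1 ELSE 0 END) * 1.0 / COUNT(*)
        AS authorization_rate
FROM payment_attempts
GROUP BY month
ORDER BY month;
"""

def monthly_authorization_rate(db_path: str) -> list[tuple[str, float]]:
    """Return (month, authorization_rate) rows from a local transactions DB."""
    with sqlite3.connect(db_path) as conn:
        return conn.execute(AUTH_RATE_SQL).fetchall()
```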
Bigger picture
There are many areas where generative AI can be used. This includes optimizing support requests, assisting with chargeback management processes, and generating documentation tailored to specific questions. Another recent example of using AI is Stripe's implementation of smart assistance in their Sigma query-based analytics tool.
Pagos has gone even further by introducing a full payment analytics copilot that allows you to interact with your payment data using a familiar chat-style interface.
What's next
I didn't find answers to three specific questions I am very curious about.
1. Does the model operate on Pagos Cloud or third-party servers?
2. In the example provided, the Pagos system possesses internal knowledge about the authorization rate. If Pagos' knowledge base lacks a specific answer, how does Pagos Copilot handle this situation? Does it search the internet for information, potentially leading to AI hallucinations, or are the responses limited?
3. I believe Pagos can proactively identify a decline in success rate without users having to inquire about it specifically. Is it possible for Pagos to run this analysis in the background and suggest specific solutions?
Thanks so much for writing about Pagos Copilot; we are excited about it too. I thought I'd take a moment to address your three questions. They are great questions, and they made us realize we could improve our documentation (which we did!) and clarify how things work.
Here are my answers:
1. Does the model operate on Pagos Cloud or third-party servers?
Pagos Copilot's prompts are engineered inside our platform to contextualize the user's input, and we leverage a third-party-hosted AI model to perform the evaluation and analysis, both for constructing data queries from the user's input and for operating on the returned data. We've taken special care to restrict the data shared with the model: the underlying data is sandboxed and not directly available to the Large Language Model (LLM). This ensures that the queried data doesn't include identifiable information.
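To make that flow concrete, here is a rough, purely illustrative sketch of the pattern. The function names, the query-spec format, and the sample data are invented for this example and are not our actual implementation.

```python
# Illustrative sketch of a sandboxed copilot flow; all names are hypothetical.
from dataclasses import dataclass

# Stand-in for the platform's own data store; in reality raw records stay
# inside the platform boundary and are never shared with the LLM.
_PAYMENT_ATTEMPTS = [
    {"status": "approved", "card_brand": "visa"},
    {"status": "declined", "card_brand": "visa"},
    {"status": "approved", "card_brand": "mastercard"},
]

@dataclass
class QuerySpec:
    metric: str              # e.g. "authorization_rate"
    filters: dict[str, str]  # e.g. {"card_brand": "visa"}

def build_query_spec(user_question: str) -> QuerySpec:
    # In the real product an LLM would translate the question into a spec;
    # here a trivial keyword check stands in for that model call.
    filters = {"card_brand": "visa"} if "visa" in user_question.lower() else {}
    return QuerySpec(metric="authorization_rate", filters=filters)

def run_in_data_platform(spec: QuerySpec) -> dict:
    # Executed inside the platform: identifiable rows never leave here.
    rows = [r for r in _PAYMENT_ATTEMPTS
            if all(r.get(k) == v for k, v in spec.filters.items())]
    approved = sum(1 for r in rows if r["status"] == "approved")
    return {"metric": spec.metric,
            "value": approved / len(rows) if rows else None}

def summarize(result: dict) -> str:
    # Only this aggregated, non-identifiable result would be handed back to
    # the third-party LLM to phrase the answer and render a chart.
    if result["value"] is None:
        return "No matching data, so no detailed response is possible."
    return f"{result['metric']} is {result['value']:.0%} for the selected filters."

print(summarize(run_in_data_platform(
    build_query_spec("What is my Visa authorization rate?"))))
```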
2. In the example provided, the Pagos system possesses internal knowledge about the authorization rate. If Pagos' knowledge base lacks a specific answer, how does Pagos Copilot handle this situation? Does it search the internet for information, potentially leading to AI hallucinations, or are the responses limited?
The responses are limited by design to prevent exactly those scenarios. All specific, numerical responses are generated from data-platform queries that Copilot helps prepare, so there's no room for hallucination: either the database queries return data, or no detailed response is possible. We also configure Copilot to prioritize relevant Pagos product documentation, blogs, and curated industry materials to provide additional context when handling open-ended questions. We don't allow Pagos Copilot to search the internet.
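Roughly speaking, the routing behaves like the simplified sketch below. Again, the names and the placeholder document lookup are illustrative only, not our production code.

```python
# Simplified illustration of the answer-routing policy described above.
def search_curated_docs(question: str) -> str | None:
    # Placeholder for retrieval over Pagos docs, blogs, and curated industry
    # materials only; the open internet is never searched.
    return None

def respond(question: str, query_result: dict | None) -> str:
    if query_result is not None:
        # Specific numbers come only from data-platform query results.
        return f"{query_result['metric']} = {query_result['value']:.1%}"
    context = search_curated_docs(question)
    if context is not None:
        return context
    # No data and no curated documentation: the response stays limited.
    return "I don't have enough data or documentation to answer that."

print(respond("Why did my auth rate drop?",
              {"metric": "authorization_rate", "value": 0.874}))
print(respond("What is a network token?", None))
```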
3. I believe Pagos can proactively identify a decline in success rate without users having to inquire about it specifically. Is it possible for Pagos to run this analysis in the background and suggest specific solutions?
Yes, connecting the natural language power of an LLM to our data observability and alerting platform is part of the ongoing evolution of the product and platform capabilities. We are thinking deeply about what makes sense to proactively push to a user and what is most helpful when they need to research or pull their data themselves. We believe there are opportunities to use LLMs to make this easier and to increase user productivity and effectiveness.
Always happy to engage with any additional questions!