
Thanks so much for writing about Pagos Copilot - we are excited about it too. I thought I'd take a moment to address your three questions directly. They're great questions, and they made us realize we could improve our documentation (which we did!) and clarify how things work.

Here are my answers:

1. Does the model operate on Pagos Cloud or third-party servers?

Pagos Copilot's prompts are engineered inside our platform to contextualize the user's input, and we leverage a third-party-hosted AI model to perform the evaluation and analysis—both for constructing data queries from the user's input and for operating on the returned data. We've taken special care to restrict the data shared with the model: it is sandboxed and not directly available to the Large Language Model (LLM). This ensures that the queried data doesn't include identifiable information.
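To make the sandboxing idea concrete, here is a minimal sketch of that general pattern. This is illustrative only, not Pagos code: all names (`call_llm`, `run_query`, the schema and fields) are hypothetical. The model sees only the schema and the question and returns a structured query spec; the platform executes the query itself, so only aggregates, never row-level data, ever reach the model.

```python
# Hypothetical sketch of the sandboxing pattern: the LLM only ever sees the
# schema and the user's question, never the underlying rows.
SCHEMA = {"transactions": ["timestamp", "card_brand", "amount", "approved"]}

def call_llm(prompt: str) -> dict:
    # Stand-in for a call to a third-party-hosted model; in this pattern it
    # returns a structured query spec rather than free text.
    return {"table": "transactions", "metric": "auth_rate",
            "group_by": "card_brand"}

def run_query(spec: dict, rows: list[dict]) -> dict:
    # Executed inside the platform: only aggregates leave this function, so
    # no row-level (potentially identifiable) data is exposed to the model.
    groups: dict[str, list[bool]] = {}
    for row in rows:
        groups.setdefault(row[spec["group_by"]], []).append(row["approved"])
    return {brand: sum(oks) / len(oks) for brand, oks in groups.items()}

rows = [
    {"timestamp": 1, "card_brand": "visa", "amount": 10.0, "approved": True},
    {"timestamp": 2, "card_brand": "visa", "amount": 25.0, "approved": False},
    {"timestamp": 3, "card_brand": "mc", "amount": 5.0, "approved": True},
]
spec = call_llm(f"Schema: {SCHEMA}\nQuestion: What is my auth rate by brand?")
print(run_query(spec, rows))  # aggregates only: {'visa': 0.5, 'mc': 1.0}
```

The key design point is that `run_query` is the only function that touches raw rows, and its return value is the only thing that can flow back into a model prompt.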

2. In the example provided, the Pagos system possesses internal knowledge about the authorization rate. If Pagos' knowledge base lacks a specific answer, how does Pagos Copilot handle this situation? Does it search the internet for information, potentially leading to AI hallucinations, or are the responses limited?

The responses are limited by design to prevent hallucinated or misleading answers. All specific, numerical responses are generated from data platform queries that Copilot helps prepare, so there's no room for hallucination: either the database queries return data, or no detailed response is possible. For open-ended questions, we configure Copilot to prioritize relevant Pagos product documentation, blogs, and curated industry materials for additional context. We don't allow Pagos Copilot to search the internet.

3. I believe Pagos can proactively identify a decline in success rate without users having to inquire about it specifically. Is it possible for Pagos to run this analysis in the background and suggest specific solutions?

Yes, connecting the natural language power of an LLM to our data observability and alerting platform is part of the ongoing evolution of the product and platform capabilities. We are thinking deeply about what makes sense to proactively push to a user versus what is most helpful when they need to research or pull their data themselves. We believe there are real opportunities to use LLMs to make this easier and to increase user productivity and effectiveness.

Always happy to engage with any additional questions!
